Test Report: Docker_Linux_crio 17223

f9ecce707d93fa4241f904962674ddf295a62997:2023-09-11:30961

Test failures (6/298)

|-------|------------------------------------------------------|--------------|
| Order | Failed test                                          | Duration (s) |
|-------|------------------------------------------------------|--------------|
|    25 | TestAddons/parallel/Ingress                          |       154.42 |
|   154 | TestIngressAddonLegacy/serial/ValidateIngressAddons  |       181.17 |
|   204 | TestMultiNode/serial/PingHostFrom2Pods               |         3.43 |
|   225 | TestRunningBinaryUpgrade                             |        65.74 |
|   250 | TestStoppedBinaryUpgrade/Upgrade                     |        95.07 |
|   264 | TestPause/serial/SecondStartNoReconfiguration        |        47.21 |
|-------|------------------------------------------------------|--------------|
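Any of these can be re-run locally against the same driver and runtime combination; a minimal sketch, assuming minikube's usual test/integration layout and its --minikube-start-args flag:

	go test -v -timeout 30m ./test/integration -run 'TestAddons/parallel/Ingress' --minikube-start-args="--driver=docker --container-runtime=crio"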
TestAddons/parallel/Ingress (154.42s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:183: (dbg) Run:  kubectl --context addons-387581 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:208: (dbg) Run:  kubectl --context addons-387581 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:221: (dbg) Run:  kubectl --context addons-387581 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:226: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [28c02d09-f453-43e0-a430-8faf21ab502a] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [28c02d09-f453-43e0-a430-8faf21ab502a] Running
addons_test.go:226: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.007996785s
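The wait above is a plain label-selector readiness gate; an equivalent manual check, reusing the context name and selector from this run, would be:

	kubectl --context addons-387581 wait --for=condition=ready pod -l run=nginx --timeout=8m0s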
addons_test.go:238: (dbg) Run:  out/minikube-linux-amd64 -p addons-387581 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:238: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-387581 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m11.288870155s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:254: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
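Status 28 in the stderr block is the exit code of the curl process inside the node, and curl uses exit code 28 for an operation that timed out, so the request stalled rather than being refused; minikube surfaces the inner status in its stderr while itself exiting 1. A manual reproduction sketch with a shorter explicit timeout (same profile name as this run):

	# -sv shows whether the TCP connect or the HTTP response is the phase that stalls
	out/minikube-linux-amd64 -p addons-387581 ssh "curl -sv --max-time 30 -H 'Host: nginx.example.com' http://127.0.0.1/"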
addons_test.go:262: (dbg) Run:  kubectl --context addons-387581 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:267: (dbg) Run:  out/minikube-linux-amd64 -p addons-387581 ip
addons_test.go:273: (dbg) Run:  nslookup hello-john.test 192.168.49.2
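The nslookup above exercises the ingress-dns addon, which answers DNS queries on the node IP returned by the ip command; the same probe with dig (alternative tooling, same names and addresses as this run):

	dig +short hello-john.test @192.168.49.2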
addons_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p addons-387581 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p addons-387581 addons disable ingress-dns --alsologtostderr -v=1: (1.03800701s)
addons_test.go:287: (dbg) Run:  out/minikube-linux-amd64 -p addons-387581 addons disable ingress --alsologtostderr -v=1
addons_test.go:287: (dbg) Done: out/minikube-linux-amd64 -p addons-387581 addons disable ingress --alsologtostderr -v=1: (7.621791373s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-387581
helpers_test.go:235: (dbg) docker inspect addons-387581:

-- stdout --
	[
	    {
	        "Id": "812f30ff51f05cc6e536238e3b6cc088c3aca9c3e85e941d8830b77fbd7b4b2c",
	        "Created": "2023-09-11T11:09:50.682718092Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 144939,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-09-11T11:09:50.94813491Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a5b1b95d50f24b5df6a9115c9ada0cb74f27ed4b03c4761eb60ee23f0bdd5210",
	        "ResolvConfPath": "/var/lib/docker/containers/812f30ff51f05cc6e536238e3b6cc088c3aca9c3e85e941d8830b77fbd7b4b2c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/812f30ff51f05cc6e536238e3b6cc088c3aca9c3e85e941d8830b77fbd7b4b2c/hostname",
	        "HostsPath": "/var/lib/docker/containers/812f30ff51f05cc6e536238e3b6cc088c3aca9c3e85e941d8830b77fbd7b4b2c/hosts",
	        "LogPath": "/var/lib/docker/containers/812f30ff51f05cc6e536238e3b6cc088c3aca9c3e85e941d8830b77fbd7b4b2c/812f30ff51f05cc6e536238e3b6cc088c3aca9c3e85e941d8830b77fbd7b4b2c-json.log",
	        "Name": "/addons-387581",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-387581:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-387581",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/3be179c81e61be52f88eec87eeee1d43c6622c34ad6f13a507ec4ccf9cd5ea9d-init/diff:/var/lib/docker/overlay2/5fefd4c14d5bc4d7d67c2f6371e7160909b1f4d0d9a655e2a127286f8f0bbb5c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3be179c81e61be52f88eec87eeee1d43c6622c34ad6f13a507ec4ccf9cd5ea9d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3be179c81e61be52f88eec87eeee1d43c6622c34ad6f13a507ec4ccf9cd5ea9d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3be179c81e61be52f88eec87eeee1d43c6622c34ad6f13a507ec4ccf9cd5ea9d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-387581",
	                "Source": "/var/lib/docker/volumes/addons-387581/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-387581",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-387581",
	                "name.minikube.sigs.k8s.io": "addons-387581",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "be69872cf143b5d2795db9b071d7eddfb9e9481dd4a99375625348161f396c46",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32892"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32891"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32888"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32890"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32889"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/be69872cf143",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-387581": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "812f30ff51f0",
	                        "addons-387581"
	                    ],
	                    "NetworkID": "c3678f6ad05d413919dabaa9e50f757b1fda7e4502e71d517d4448227c1a9be7",
	                    "EndpointID": "387517bbecd158ff3aa49d69962cb830a05161b721cc5fdc6a84b948befd3677",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
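Most of what a post-mortem needs from this blob is the port map; the same Go template these logs use later can pull a single mapping, for example the host port published for SSH:

	docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-387581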
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-387581 -n addons-387581
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-387581 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-387581 logs -n 25: (1.180122754s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-804318   | jenkins | v1.31.2 | 11 Sep 23 11:09 UTC |                     |
	|         | -p download-only-804318        |                        |         |         |                     |                     |
	|         | --force --alsologtostderr      |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	|         | --driver=docker                |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	| start   | -o=json --download-only        | download-only-804318   | jenkins | v1.31.2 | 11 Sep 23 11:09 UTC |                     |
	|         | -p download-only-804318        |                        |         |         |                     |                     |
	|         | --force --alsologtostderr      |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1   |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	|         | --driver=docker                |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	| delete  | --all                          | minikube               | jenkins | v1.31.2 | 11 Sep 23 11:09 UTC | 11 Sep 23 11:09 UTC |
	| delete  | -p download-only-804318        | download-only-804318   | jenkins | v1.31.2 | 11 Sep 23 11:09 UTC | 11 Sep 23 11:09 UTC |
	| delete  | -p download-only-804318        | download-only-804318   | jenkins | v1.31.2 | 11 Sep 23 11:09 UTC | 11 Sep 23 11:09 UTC |
	| start   | --download-only -p             | download-docker-028771 | jenkins | v1.31.2 | 11 Sep 23 11:09 UTC |                     |
	|         | download-docker-028771         |                        |         |         |                     |                     |
	|         | --alsologtostderr              |                        |         |         |                     |                     |
	|         | --driver=docker                |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	| delete  | -p download-docker-028771      | download-docker-028771 | jenkins | v1.31.2 | 11 Sep 23 11:09 UTC | 11 Sep 23 11:09 UTC |
	| start   | --download-only -p             | binary-mirror-929527   | jenkins | v1.31.2 | 11 Sep 23 11:09 UTC |                     |
	|         | binary-mirror-929527           |                        |         |         |                     |                     |
	|         | --alsologtostderr              |                        |         |         |                     |                     |
	|         | --binary-mirror                |                        |         |         |                     |                     |
	|         | http://127.0.0.1:37375         |                        |         |         |                     |                     |
	|         | --driver=docker                |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-929527        | binary-mirror-929527   | jenkins | v1.31.2 | 11 Sep 23 11:09 UTC | 11 Sep 23 11:09 UTC |
	| start   | -p addons-387581               | addons-387581          | jenkins | v1.31.2 | 11 Sep 23 11:09 UTC | 11 Sep 23 11:11 UTC |
	|         | --wait=true --memory=4000      |                        |         |         |                     |                     |
	|         | --alsologtostderr              |                        |         |         |                     |                     |
	|         | --addons=registry              |                        |         |         |                     |                     |
	|         | --addons=metrics-server        |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots       |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver   |                        |         |         |                     |                     |
	|         | --addons=gcp-auth              |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner         |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget      |                        |         |         |                     |                     |
	|         | --driver=docker                |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	|         | --addons=ingress               |                        |         |         |                     |                     |
	|         | --addons=ingress-dns           |                        |         |         |                     |                     |
	|         | --addons=helm-tiller           |                        |         |         |                     |                     |
	| addons  | enable headlamp                | addons-387581          | jenkins | v1.31.2 | 11 Sep 23 11:11 UTC | 11 Sep 23 11:11 UTC |
	|         | -p addons-387581               |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p       | addons-387581          | jenkins | v1.31.2 | 11 Sep 23 11:11 UTC | 11 Sep 23 11:11 UTC |
	|         | addons-387581                  |                        |         |         |                     |                     |
	| ip      | addons-387581 ip               | addons-387581          | jenkins | v1.31.2 | 11 Sep 23 11:11 UTC | 11 Sep 23 11:11 UTC |
	| addons  | addons-387581 addons disable   | addons-387581          | jenkins | v1.31.2 | 11 Sep 23 11:11 UTC | 11 Sep 23 11:11 UTC |
	|         | registry --alsologtostderr     |                        |         |         |                     |                     |
	|         | -v=1                           |                        |         |         |                     |                     |
	| addons  | addons-387581 addons disable   | addons-387581          | jenkins | v1.31.2 | 11 Sep 23 11:11 UTC | 11 Sep 23 11:11 UTC |
	|         | helm-tiller --alsologtostderr  |                        |         |         |                     |                     |
	|         | -v=1                           |                        |         |         |                     |                     |
	| addons  | addons-387581 addons           | addons-387581          | jenkins | v1.31.2 | 11 Sep 23 11:11 UTC | 11 Sep 23 11:11 UTC |
	|         | disable metrics-server         |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p    | addons-387581          | jenkins | v1.31.2 | 11 Sep 23 11:11 UTC | 11 Sep 23 11:11 UTC |
	|         | addons-387581                  |                        |         |         |                     |                     |
	| ssh     | addons-387581 ssh curl -s      | addons-387581          | jenkins | v1.31.2 | 11 Sep 23 11:12 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:    |                        |         |         |                     |                     |
	|         | nginx.example.com'             |                        |         |         |                     |                     |
	| addons  | addons-387581 addons           | addons-387581          | jenkins | v1.31.2 | 11 Sep 23 11:12 UTC | 11 Sep 23 11:12 UTC |
	|         | disable csi-hostpath-driver    |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                        |         |         |                     |                     |
	| addons  | addons-387581 addons           | addons-387581          | jenkins | v1.31.2 | 11 Sep 23 11:12 UTC | 11 Sep 23 11:12 UTC |
	|         | disable volumesnapshots        |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                        |         |         |                     |                     |
	| ip      | addons-387581 ip               | addons-387581          | jenkins | v1.31.2 | 11 Sep 23 11:14 UTC | 11 Sep 23 11:14 UTC |
	| addons  | addons-387581 addons disable   | addons-387581          | jenkins | v1.31.2 | 11 Sep 23 11:14 UTC | 11 Sep 23 11:14 UTC |
	|         | ingress-dns --alsologtostderr  |                        |         |         |                     |                     |
	|         | -v=1                           |                        |         |         |                     |                     |
	| addons  | addons-387581 addons disable   | addons-387581          | jenkins | v1.31.2 | 11 Sep 23 11:14 UTC | 11 Sep 23 11:14 UTC |
	|         | ingress --alsologtostderr -v=1 |                        |         |         |                     |                     |
	|---------|--------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/11 11:09:28
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.20.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0911 11:09:28.776848  144272 out.go:296] Setting OutFile to fd 1 ...
	I0911 11:09:28.777340  144272 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 11:09:28.777396  144272 out.go:309] Setting ErrFile to fd 2...
	I0911 11:09:28.777413  144272 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 11:09:28.777918  144272 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17223-136166/.minikube/bin
	I0911 11:09:28.778921  144272 out.go:303] Setting JSON to false
	I0911 11:09:28.780140  144272 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":3117,"bootTime":1694427452,"procs":695,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0911 11:09:28.780205  144272 start.go:138] virtualization: kvm guest
	I0911 11:09:28.782701  144272 out.go:177] * [addons-387581] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0911 11:09:28.784334  144272 out.go:177]   - MINIKUBE_LOCATION=17223
	I0911 11:09:28.784339  144272 notify.go:220] Checking for updates...
	I0911 11:09:28.785907  144272 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0911 11:09:28.787455  144272 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17223-136166/kubeconfig
	I0911 11:09:28.788866  144272 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17223-136166/.minikube
	I0911 11:09:28.790325  144272 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0911 11:09:28.791797  144272 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0911 11:09:28.793340  144272 driver.go:373] Setting default libvirt URI to qemu:///system
	I0911 11:09:28.815017  144272 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I0911 11:09:28.815160  144272 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0911 11:09:28.873047  144272 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:41 SystemTime:2023-09-11 11:09:28.864634566 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1041-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0911 11:09:28.873140  144272 docker.go:294] overlay module found
	I0911 11:09:28.875098  144272 out.go:177] * Using the docker driver based on user configuration
	I0911 11:09:28.876669  144272 start.go:298] selected driver: docker
	I0911 11:09:28.876681  144272 start.go:902] validating driver "docker" against <nil>
	I0911 11:09:28.876691  144272 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0911 11:09:28.877365  144272 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0911 11:09:28.928039  144272 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:41 SystemTime:2023-09-11 11:09:28.918897342 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1041-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0911 11:09:28.928238  144272 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0911 11:09:28.928456  144272 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0911 11:09:28.930485  144272 out.go:177] * Using Docker driver with root privileges
	I0911 11:09:28.932256  144272 cni.go:84] Creating CNI manager for ""
	I0911 11:09:28.932278  144272 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0911 11:09:28.932293  144272 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I0911 11:09:28.932309  144272 start_flags.go:321] config:
	{Name:addons-387581 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:addons-387581 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 11:09:28.933998  144272 out.go:177] * Starting control plane node addons-387581 in cluster addons-387581
	I0911 11:09:28.935368  144272 cache.go:122] Beginning downloading kic base image for docker with crio
	I0911 11:09:28.936638  144272 out.go:177] * Pulling base image ...
	I0911 11:09:28.937782  144272 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0911 11:09:28.937800  144272 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b in local docker daemon
	I0911 11:09:28.937820  144272 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17223-136166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4
	I0911 11:09:28.937831  144272 cache.go:57] Caching tarball of preloaded images
	I0911 11:09:28.937899  144272 preload.go:174] Found /home/jenkins/minikube-integration/17223-136166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0911 11:09:28.937911  144272 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on crio
	I0911 11:09:28.938282  144272 profile.go:148] Saving config to /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/addons-387581/config.json ...
	I0911 11:09:28.938311  144272 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/addons-387581/config.json: {Name:mkd03db64b78468b16ad6245743c22ab4a3b2d16 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 11:09:28.953298  144272 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b to local cache
	I0911 11:09:28.953416  144272 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b in local cache directory
	I0911 11:09:28.953432  144272 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b in local cache directory, skipping pull
	I0911 11:09:28.953435  144272 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b exists in cache, skipping pull
	I0911 11:09:28.953443  144272 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b as a tarball
	I0911 11:09:28.953448  144272 cache.go:163] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b from local cache
	I0911 11:09:41.973108  144272 cache.go:165] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b from cached tarball
	I0911 11:09:41.973154  144272 cache.go:195] Successfully downloaded all kic artifacts
	I0911 11:09:41.973204  144272 start.go:365] acquiring machines lock for addons-387581: {Name:mk9692e1165ca55ac0f59fd7c656eb1cde6f52dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 11:09:41.973316  144272 start.go:369] acquired machines lock for "addons-387581" in 88.126µs
	I0911 11:09:41.973344  144272 start.go:93] Provisioning new machine with config: &{Name:addons-387581 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:addons-387581 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0911 11:09:41.973497  144272 start.go:125] createHost starting for "" (driver="docker")
	I0911 11:09:41.976902  144272 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0911 11:09:41.977138  144272 start.go:159] libmachine.API.Create for "addons-387581" (driver="docker")
	I0911 11:09:41.977162  144272 client.go:168] LocalClient.Create starting
	I0911 11:09:41.977646  144272 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/17223-136166/.minikube/certs/ca.pem
	I0911 11:09:42.135468  144272 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/17223-136166/.minikube/certs/cert.pem
	I0911 11:09:42.229436  144272 cli_runner.go:164] Run: docker network inspect addons-387581 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0911 11:09:42.244873  144272 cli_runner.go:211] docker network inspect addons-387581 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0911 11:09:42.244965  144272 network_create.go:281] running [docker network inspect addons-387581] to gather additional debugging logs...
	I0911 11:09:42.244991  144272 cli_runner.go:164] Run: docker network inspect addons-387581
	W0911 11:09:42.260253  144272 cli_runner.go:211] docker network inspect addons-387581 returned with exit code 1
	I0911 11:09:42.260302  144272 network_create.go:284] error running [docker network inspect addons-387581]: docker network inspect addons-387581: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-387581 not found
	I0911 11:09:42.260317  144272 network_create.go:286] output of [docker network inspect addons-387581]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-387581 not found
	
	** /stderr **
	I0911 11:09:42.260390  144272 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0911 11:09:42.275750  144272 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0017987a0}
	I0911 11:09:42.275815  144272 network_create.go:123] attempt to create docker network addons-387581 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0911 11:09:42.275869  144272 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-387581 addons-387581
	I0911 11:09:42.329823  144272 network_create.go:107] docker network addons-387581 192.168.49.0/24 created
	I0911 11:09:42.329859  144272 kic.go:117] calculated static IP "192.168.49.2" for the "addons-387581" container
	I0911 11:09:42.329915  144272 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0911 11:09:42.344736  144272 cli_runner.go:164] Run: docker volume create addons-387581 --label name.minikube.sigs.k8s.io=addons-387581 --label created_by.minikube.sigs.k8s.io=true
	I0911 11:09:42.361573  144272 oci.go:103] Successfully created a docker volume addons-387581
	I0911 11:09:42.361670  144272 cli_runner.go:164] Run: docker run --rm --name addons-387581-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-387581 --entrypoint /usr/bin/test -v addons-387581:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b -d /var/lib
	I0911 11:09:45.310475  144272 cli_runner.go:217] Completed: docker run --rm --name addons-387581-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-387581 --entrypoint /usr/bin/test -v addons-387581:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b -d /var/lib: (2.948760648s)
	I0911 11:09:45.310507  144272 oci.go:107] Successfully prepared a docker volume addons-387581
	I0911 11:09:45.310525  144272 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0911 11:09:45.310546  144272 kic.go:190] Starting extracting preloaded images to volume ...
	I0911 11:09:45.310597  144272 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17223-136166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-387581:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b -I lz4 -xf /preloaded.tar -C /extractDir
	I0911 11:09:50.617458  144272 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17223-136166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-387581:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b -I lz4 -xf /preloaded.tar -C /extractDir: (5.306788142s)
	I0911 11:09:50.617492  144272 kic.go:199] duration metric: took 5.306942 seconds to extract preloaded images to volume
	W0911 11:09:50.617637  144272 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0911 11:09:50.617745  144272 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0911 11:09:50.667503  144272 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-387581 --name addons-387581 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-387581 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-387581 --network addons-387581 --ip 192.168.49.2 --volume addons-387581:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b
	I0911 11:09:50.956728  144272 cli_runner.go:164] Run: docker container inspect addons-387581 --format={{.State.Running}}
	I0911 11:09:50.974168  144272 cli_runner.go:164] Run: docker container inspect addons-387581 --format={{.State.Status}}
	I0911 11:09:50.993517  144272 cli_runner.go:164] Run: docker exec addons-387581 stat /var/lib/dpkg/alternatives/iptables
	I0911 11:09:51.032090  144272 oci.go:144] the created container "addons-387581" has a running status.
	I0911 11:09:51.032132  144272 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/17223-136166/.minikube/machines/addons-387581/id_rsa...
	I0911 11:09:51.387304  144272 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17223-136166/.minikube/machines/addons-387581/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0911 11:09:51.413684  144272 cli_runner.go:164] Run: docker container inspect addons-387581 --format={{.State.Status}}
	I0911 11:09:51.431557  144272 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0911 11:09:51.431589  144272 kic_runner.go:114] Args: [docker exec --privileged addons-387581 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0911 11:09:51.514328  144272 cli_runner.go:164] Run: docker container inspect addons-387581 --format={{.State.Status}}
	I0911 11:09:51.535164  144272 machine.go:88] provisioning docker machine ...
	I0911 11:09:51.535202  144272 ubuntu.go:169] provisioning hostname "addons-387581"
	I0911 11:09:51.535264  144272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-387581
	I0911 11:09:51.555360  144272 main.go:141] libmachine: Using SSH client type: native
	I0911 11:09:51.555760  144272 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 127.0.0.1 32892 <nil> <nil>}
	I0911 11:09:51.555775  144272 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-387581 && echo "addons-387581" | sudo tee /etc/hostname
	I0911 11:09:51.697319  144272 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-387581
	
	I0911 11:09:51.697403  144272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-387581
	I0911 11:09:51.717636  144272 main.go:141] libmachine: Using SSH client type: native
	I0911 11:09:51.718219  144272 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 127.0.0.1 32892 <nil> <nil>}
	I0911 11:09:51.718240  144272 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-387581' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-387581/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-387581' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0911 11:09:51.846021  144272 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0911 11:09:51.846053  144272 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17223-136166/.minikube CaCertPath:/home/jenkins/minikube-integration/17223-136166/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17223-136166/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17223-136166/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17223-136166/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17223-136166/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17223-136166/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17223-136166/.minikube}
	I0911 11:09:51.846075  144272 ubuntu.go:177] setting up certificates
	I0911 11:09:51.846104  144272 provision.go:83] configureAuth start
	I0911 11:09:51.846155  144272 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-387581
	I0911 11:09:51.862103  144272 provision.go:138] copyHostCerts
	I0911 11:09:51.862192  144272 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17223-136166/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17223-136166/.minikube/ca.pem (1082 bytes)
	I0911 11:09:51.862320  144272 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17223-136166/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17223-136166/.minikube/cert.pem (1123 bytes)
	I0911 11:09:51.862394  144272 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17223-136166/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17223-136166/.minikube/key.pem (1679 bytes)
	I0911 11:09:51.862456  144272 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17223-136166/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17223-136166/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17223-136166/.minikube/certs/ca-key.pem org=jenkins.addons-387581 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube addons-387581]
	I0911 11:09:52.187473  144272 provision.go:172] copyRemoteCerts
	I0911 11:09:52.187529  144272 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0911 11:09:52.187572  144272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-387581
	I0911 11:09:52.203820  144272 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32892 SSHKeyPath:/home/jenkins/minikube-integration/17223-136166/.minikube/machines/addons-387581/id_rsa Username:docker}
	I0911 11:09:52.294264  144272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0911 11:09:52.315172  144272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0911 11:09:52.335894  144272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0911 11:09:52.356822  144272 provision.go:86] duration metric: configureAuth took 510.704747ms
	I0911 11:09:52.356849  144272 ubuntu.go:193] setting minikube options for container-runtime
	I0911 11:09:52.357006  144272 config.go:182] Loaded profile config "addons-387581": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0911 11:09:52.357102  144272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-387581
	I0911 11:09:52.372839  144272 main.go:141] libmachine: Using SSH client type: native
	I0911 11:09:52.373238  144272 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 127.0.0.1 32892 <nil> <nil>}
	I0911 11:09:52.373257  144272 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0911 11:09:52.585133  144272 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0911 11:09:52.585164  144272 machine.go:91] provisioned docker machine in 1.049975899s
	I0911 11:09:52.585175  144272 client.go:171] LocalClient.Create took 10.608007584s
	I0911 11:09:52.585204  144272 start.go:167] duration metric: libmachine.API.Create for "addons-387581" took 10.608068818s
	I0911 11:09:52.585213  144272 start.go:300] post-start starting for "addons-387581" (driver="docker")
	I0911 11:09:52.585224  144272 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0911 11:09:52.585302  144272 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0911 11:09:52.585350  144272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-387581
	I0911 11:09:52.603454  144272 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32892 SSHKeyPath:/home/jenkins/minikube-integration/17223-136166/.minikube/machines/addons-387581/id_rsa Username:docker}
	I0911 11:09:52.694656  144272 ssh_runner.go:195] Run: cat /etc/os-release
	I0911 11:09:52.697705  144272 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0911 11:09:52.697749  144272 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0911 11:09:52.697764  144272 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0911 11:09:52.697774  144272 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0911 11:09:52.697786  144272 filesync.go:126] Scanning /home/jenkins/minikube-integration/17223-136166/.minikube/addons for local assets ...
	I0911 11:09:52.697850  144272 filesync.go:126] Scanning /home/jenkins/minikube-integration/17223-136166/.minikube/files for local assets ...
	I0911 11:09:52.697877  144272 start.go:303] post-start completed in 112.658788ms
	I0911 11:09:52.698251  144272 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-387581
	I0911 11:09:52.714391  144272 profile.go:148] Saving config to /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/addons-387581/config.json ...
	I0911 11:09:52.714630  144272 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0911 11:09:52.714670  144272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-387581
	I0911 11:09:52.731146  144272 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32892 SSHKeyPath:/home/jenkins/minikube-integration/17223-136166/.minikube/machines/addons-387581/id_rsa Username:docker}
	I0911 11:09:52.818803  144272 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0911 11:09:52.823701  144272 start.go:128] duration metric: createHost completed in 10.850186631s
	I0911 11:09:52.823737  144272 start.go:83] releasing machines lock for "addons-387581", held for 10.850408879s
	I0911 11:09:52.823818  144272 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-387581
	I0911 11:09:52.839706  144272 ssh_runner.go:195] Run: cat /version.json
	I0911 11:09:52.839753  144272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-387581
	I0911 11:09:52.839855  144272 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0911 11:09:52.839935  144272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-387581
	I0911 11:09:52.855833  144272 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32892 SSHKeyPath:/home/jenkins/minikube-integration/17223-136166/.minikube/machines/addons-387581/id_rsa Username:docker}
	I0911 11:09:52.856962  144272 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32892 SSHKeyPath:/home/jenkins/minikube-integration/17223-136166/.minikube/machines/addons-387581/id_rsa Username:docker}
	I0911 11:09:53.025396  144272 ssh_runner.go:195] Run: systemctl --version
	I0911 11:09:53.029522  144272 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0911 11:09:53.164274  144272 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0911 11:09:53.168237  144272 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0911 11:09:53.185062  144272 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0911 11:09:53.185140  144272 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0911 11:09:53.211180  144272 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
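
	Both find passes above use the same trick: conflicting CNI configs are renamed to *.mk_disabled rather than deleted, so they can be restored later. A standalone sketch of the bridge/podman pass, with the quoting the logged one-liner leaves implicit:

	    # Sideline bridge/podman CNI configs so kindnet can manage pod networking.
	    sudo find /etc/cni/net.d -maxdepth 1 -type f \
	      \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
	      -exec sh -c 'sudo mv "$1" "$1.mk_disabled"' _ {} \;
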
	I0911 11:09:53.211206  144272 start.go:466] detecting cgroup driver to use...
	I0911 11:09:53.211254  144272 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0911 11:09:53.211294  144272 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0911 11:09:53.224879  144272 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0911 11:09:53.235051  144272 docker.go:196] disabling cri-docker service (if available) ...
	I0911 11:09:53.235112  144272 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0911 11:09:53.247093  144272 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0911 11:09:53.259546  144272 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0911 11:09:53.331546  144272 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0911 11:09:53.407332  144272 docker.go:212] disabling docker service ...
	I0911 11:09:53.407384  144272 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0911 11:09:53.424440  144272 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0911 11:09:53.434495  144272 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0911 11:09:53.506365  144272 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0911 11:09:53.584685  144272 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0911 11:09:53.594680  144272 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0911 11:09:53.609057  144272 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0911 11:09:53.609137  144272 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 11:09:53.618226  144272 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0911 11:09:53.618308  144272 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 11:09:53.627017  144272 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 11:09:53.635437  144272 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 11:09:53.643986  144272 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0911 11:09:53.651850  144272 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0911 11:09:53.659113  144272 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0911 11:09:53.666186  144272 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0911 11:09:53.738079  144272 ssh_runner.go:195] Run: sudo systemctl restart crio
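
	Taken together, the runtime setup above writes two files: /etc/crictl.yaml, pointing crictl at the CRI-O socket, and the 02-crio.conf drop-in edited via sed. A consolidated view of those edits, with every command taken from the log lines above:

	    # Point crictl at CRI-O.
	    printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
	    # Pause image and cgroup settings in the CRI-O drop-in.
	    conf=/etc/crio/crio.conf.d/02-crio.conf
	    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' "$conf"
	    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$conf"
	    sudo sed -i '/conmon_cgroup = .*/d' "$conf"
	    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$conf"
	    sudo systemctl daemon-reload && sudo systemctl restart crio
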
	I0911 11:09:53.826903  144272 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0911 11:09:53.826981  144272 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0911 11:09:53.830186  144272 start.go:534] Will wait 60s for crictl version
	I0911 11:09:53.830236  144272 ssh_runner.go:195] Run: which crictl
	I0911 11:09:53.833098  144272 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0911 11:09:53.865731  144272 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0911 11:09:53.865832  144272 ssh_runner.go:195] Run: crio --version
	I0911 11:09:53.898561  144272 ssh_runner.go:195] Run: crio --version
	I0911 11:09:53.932743  144272 out.go:177] * Preparing Kubernetes v1.28.1 on CRI-O 1.24.6 ...
	I0911 11:09:53.934351  144272 cli_runner.go:164] Run: docker network inspect addons-387581 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0911 11:09:53.951852  144272 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0911 11:09:53.955368  144272 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
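
	This hosts-file update (repeated later for control-plane.minikube.internal) is an idempotent replace-then-append: filter out any stale entry, append the fresh mapping to a temp file, and sudo cp it back, since a bare redirect into /etc/hosts would not run as root. As a reusable sketch (helper name hypothetical):

	    # Hypothetical wrapper for the inline pattern minikube runs above.
	    set_host_entry() {  # usage: set_host_entry 192.168.49.1 host.minikube.internal
	      local ip="$1" name="$2"
	      { grep -v $'\t'"${name}\$" /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > "/tmp/h.$$"
	      sudo cp "/tmp/h.$$" /etc/hosts
	    }
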
	I0911 11:09:53.965111  144272 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0911 11:09:53.965162  144272 ssh_runner.go:195] Run: sudo crictl images --output json
	I0911 11:09:54.015330  144272 crio.go:496] all images are preloaded for cri-o runtime.
	I0911 11:09:54.015352  144272 crio.go:415] Images already preloaded, skipping extraction
	I0911 11:09:54.015399  144272 ssh_runner.go:195] Run: sudo crictl images --output json
	I0911 11:09:54.046930  144272 crio.go:496] all images are preloaded for cri-o runtime.
	I0911 11:09:54.046948  144272 cache_images.go:84] Images are preloaded, skipping loading
	I0911 11:09:54.047000  144272 ssh_runner.go:195] Run: crio config
	I0911 11:09:54.086875  144272 cni.go:84] Creating CNI manager for ""
	I0911 11:09:54.086899  144272 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0911 11:09:54.086919  144272 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0911 11:09:54.086944  144272 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-387581 NodeName:addons-387581 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0911 11:09:54.087074  144272 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-387581"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0911 11:09:54.087156  144272 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=addons-387581 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:addons-387581 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
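
	The empty ExecStart= in the drop-in above is the standard systemd idiom for replacing, rather than appending to, the ExecStart inherited from the base kubelet.service. The pattern in isolation (flags abbreviated):

	    [Service]
	    # The first line clears the base unit's ExecStart; the second installs the override.
	    ExecStart=
	    ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --config=/var/lib/kubelet/config.yaml --kubeconfig=/etc/kubernetes/kubelet.conf
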
	I0911 11:09:54.087221  144272 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0911 11:09:54.095183  144272 binaries.go:44] Found k8s binaries, skipping transfer
	I0911 11:09:54.095238  144272 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0911 11:09:54.102667  144272 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (423 bytes)
	I0911 11:09:54.117801  144272 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0911 11:09:54.133034  144272 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2094 bytes)
	I0911 11:09:54.148189  144272 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0911 11:09:54.151394  144272 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0911 11:09:54.160717  144272 certs.go:56] Setting up /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/addons-387581 for IP: 192.168.49.2
	I0911 11:09:54.160745  144272 certs.go:190] acquiring lock for shared ca certs: {Name:mk582952512be9164e5fce9dc802f18bafa97346 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 11:09:54.160872  144272 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/17223-136166/.minikube/ca.key
	I0911 11:09:54.259857  144272 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17223-136166/.minikube/ca.crt ...
	I0911 11:09:54.259884  144272 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17223-136166/.minikube/ca.crt: {Name:mk8c851f38ffc3d401691671c3a122cc6488005e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 11:09:54.260068  144272 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17223-136166/.minikube/ca.key ...
	I0911 11:09:54.260086  144272 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17223-136166/.minikube/ca.key: {Name:mk8ce7f687e466128485009db998246bae4a38ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 11:09:54.260185  144272 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/17223-136166/.minikube/proxy-client-ca.key
	I0911 11:09:54.427548  144272 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17223-136166/.minikube/proxy-client-ca.crt ...
	I0911 11:09:54.427577  144272 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17223-136166/.minikube/proxy-client-ca.crt: {Name:mk178ccea6adf443330394a94770a7d64711f344 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 11:09:54.427757  144272 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17223-136166/.minikube/proxy-client-ca.key ...
	I0911 11:09:54.427774  144272 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17223-136166/.minikube/proxy-client-ca.key: {Name:mkd735ad65d9468bcca7629c818ac0f0ea0ac0ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 11:09:54.427902  144272 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/addons-387581/client.key
	I0911 11:09:54.427920  144272 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/addons-387581/client.crt with IP's: []
	I0911 11:09:54.547640  144272 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/addons-387581/client.crt ...
	I0911 11:09:54.547668  144272 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/addons-387581/client.crt: {Name:mkb799c663fa0304fecc4128551cb243543e3ee7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 11:09:54.547843  144272 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/addons-387581/client.key ...
	I0911 11:09:54.547858  144272 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/addons-387581/client.key: {Name:mkdc38d1da3f27831ba2b24d2cf23faa6fed4588 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 11:09:54.547941  144272 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/addons-387581/apiserver.key.dd3b5fb2
	I0911 11:09:54.547962  144272 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/addons-387581/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0911 11:09:54.754222  144272 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/addons-387581/apiserver.crt.dd3b5fb2 ...
	I0911 11:09:54.754258  144272 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/addons-387581/apiserver.crt.dd3b5fb2: {Name:mk627ef711fb83801213466032de1185a99d8cf0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 11:09:54.754471  144272 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/addons-387581/apiserver.key.dd3b5fb2 ...
	I0911 11:09:54.754490  144272 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/addons-387581/apiserver.key.dd3b5fb2: {Name:mk36dd7ed32666901fe90b3fe58effe95e1dfbd1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 11:09:54.754606  144272 certs.go:337] copying /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/addons-387581/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/addons-387581/apiserver.crt
	I0911 11:09:54.754735  144272 certs.go:341] copying /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/addons-387581/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/addons-387581/apiserver.key
	I0911 11:09:54.754816  144272 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/addons-387581/proxy-client.key
	I0911 11:09:54.754843  144272 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/addons-387581/proxy-client.crt with IP's: []
	I0911 11:09:54.918410  144272 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/addons-387581/proxy-client.crt ...
	I0911 11:09:54.918442  144272 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/addons-387581/proxy-client.crt: {Name:mkb409c49929a1610f61f1a15e5b267c19908ce5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 11:09:54.918640  144272 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/addons-387581/proxy-client.key ...
	I0911 11:09:54.918655  144272 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/addons-387581/proxy-client.key: {Name:mk703a0099d019963978705216910dfa4f4dd401 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 11:09:54.918854  144272 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-136166/.minikube/certs/home/jenkins/minikube-integration/17223-136166/.minikube/certs/ca-key.pem (1679 bytes)
	I0911 11:09:54.918896  144272 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-136166/.minikube/certs/home/jenkins/minikube-integration/17223-136166/.minikube/certs/ca.pem (1082 bytes)
	I0911 11:09:54.918921  144272 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-136166/.minikube/certs/home/jenkins/minikube-integration/17223-136166/.minikube/certs/cert.pem (1123 bytes)
	I0911 11:09:54.918952  144272 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-136166/.minikube/certs/home/jenkins/minikube-integration/17223-136166/.minikube/certs/key.pem (1679 bytes)
	I0911 11:09:54.919473  144272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/addons-387581/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0911 11:09:54.940230  144272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/addons-387581/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0911 11:09:54.960328  144272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/addons-387581/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0911 11:09:54.980345  144272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/addons-387581/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0911 11:09:55.001042  144272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0911 11:09:55.022442  144272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0911 11:09:55.043239  144272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0911 11:09:55.063892  144272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0911 11:09:55.084344  144272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0911 11:09:55.104654  144272 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0911 11:09:55.119978  144272 ssh_runner.go:195] Run: openssl version
	I0911 11:09:55.125228  144272 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0911 11:09:55.133626  144272 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0911 11:09:55.136620  144272 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 11 11:09 /usr/share/ca-certificates/minikubeCA.pem
	I0911 11:09:55.136669  144272 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0911 11:09:55.142601  144272 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
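
	The b5213941.0 name above is not arbitrary: it is the OpenSSL subject hash of the minikube CA, computed two lines earlier, which lets TLS stacks that scan /etc/ssl/certs locate the certificate by hash. Recomputing the link name by hand:

	    # The symlink name is derived from the CA's subject hash (value from the log).
	    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"   # -> b5213941.0
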
	I0911 11:09:55.150426  144272 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0911 11:09:55.153197  144272 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0911 11:09:55.153240  144272 kubeadm.go:404] StartCluster: {Name:addons-387581 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:addons-387581 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 11:09:55.153309  144272 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0911 11:09:55.153340  144272 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0911 11:09:55.185157  144272 cri.go:89] found id: ""
	I0911 11:09:55.185215  144272 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0911 11:09:55.193222  144272 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0911 11:09:55.200884  144272 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0911 11:09:55.200937  144272 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0911 11:09:55.208509  144272 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0911 11:09:55.208574  144272 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0911 11:09:55.250844  144272 kubeadm.go:322] [init] Using Kubernetes version: v1.28.1
	I0911 11:09:55.250912  144272 kubeadm.go:322] [preflight] Running pre-flight checks
	I0911 11:09:55.284957  144272 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0911 11:09:55.285049  144272 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1041-gcp
	I0911 11:09:55.285086  144272 kubeadm.go:322] OS: Linux
	I0911 11:09:55.285142  144272 kubeadm.go:322] CGROUPS_CPU: enabled
	I0911 11:09:55.285213  144272 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0911 11:09:55.285269  144272 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0911 11:09:55.285337  144272 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0911 11:09:55.285421  144272 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0911 11:09:55.285501  144272 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0911 11:09:55.285556  144272 kubeadm.go:322] CGROUPS_PIDS: enabled
	I0911 11:09:55.285633  144272 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I0911 11:09:55.285712  144272 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I0911 11:09:55.345582  144272 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0911 11:09:55.345719  144272 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0911 11:09:55.345829  144272 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0911 11:09:55.531859  144272 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0911 11:09:55.535847  144272 out.go:204]   - Generating certificates and keys ...
	I0911 11:09:55.536003  144272 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0911 11:09:55.536124  144272 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0911 11:09:55.744927  144272 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0911 11:09:55.808655  144272 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0911 11:09:55.885878  144272 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0911 11:09:56.285008  144272 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0911 11:09:56.505823  144272 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0911 11:09:56.505939  144272 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-387581 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0911 11:09:56.615100  144272 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0911 11:09:56.615271  144272 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-387581 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0911 11:09:56.707382  144272 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0911 11:09:56.873265  144272 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0911 11:09:57.018929  144272 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0911 11:09:57.019039  144272 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0911 11:09:57.120903  144272 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0911 11:09:57.353359  144272 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0911 11:09:57.477883  144272 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0911 11:09:57.574123  144272 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0911 11:09:57.575162  144272 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0911 11:09:57.577495  144272 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0911 11:09:57.580085  144272 out.go:204]   - Booting up control plane ...
	I0911 11:09:57.580253  144272 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0911 11:09:57.580372  144272 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0911 11:09:57.580456  144272 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0911 11:09:57.588159  144272 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0911 11:09:57.588870  144272 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0911 11:09:57.588942  144272 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0911 11:09:57.671613  144272 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0911 11:10:02.674317  144272 kubeadm.go:322] [apiclient] All control plane components are healthy after 5.002707 seconds
	I0911 11:10:02.674494  144272 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0911 11:10:02.687158  144272 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0911 11:10:03.210128  144272 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0911 11:10:03.210372  144272 kubeadm.go:322] [mark-control-plane] Marking the node addons-387581 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0911 11:10:03.719434  144272 kubeadm.go:322] [bootstrap-token] Using token: apfhwz.hskbzfzojvhioe9a
	I0911 11:10:03.721010  144272 out.go:204]   - Configuring RBAC rules ...
	I0911 11:10:03.721152  144272 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0911 11:10:03.724625  144272 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0911 11:10:03.730332  144272 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0911 11:10:03.734248  144272 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0911 11:10:03.736928  144272 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0911 11:10:03.739498  144272 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0911 11:10:03.749622  144272 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0911 11:10:03.943750  144272 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0911 11:10:04.162268  144272 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0911 11:10:04.163380  144272 kubeadm.go:322] 
	I0911 11:10:04.163460  144272 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0911 11:10:04.163470  144272 kubeadm.go:322] 
	I0911 11:10:04.163533  144272 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0911 11:10:04.163537  144272 kubeadm.go:322] 
	I0911 11:10:04.163557  144272 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0911 11:10:04.163600  144272 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0911 11:10:04.163638  144272 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0911 11:10:04.163641  144272 kubeadm.go:322] 
	I0911 11:10:04.163681  144272 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0911 11:10:04.163684  144272 kubeadm.go:322] 
	I0911 11:10:04.163721  144272 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0911 11:10:04.163725  144272 kubeadm.go:322] 
	I0911 11:10:04.163763  144272 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0911 11:10:04.163818  144272 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0911 11:10:04.163885  144272 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0911 11:10:04.163889  144272 kubeadm.go:322] 
	I0911 11:10:04.163972  144272 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0911 11:10:04.164029  144272 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0911 11:10:04.164033  144272 kubeadm.go:322] 
	I0911 11:10:04.164094  144272 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token apfhwz.hskbzfzojvhioe9a \
	I0911 11:10:04.164170  144272 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:29dd5c485ccb33701674fbb0d9adfdc876e5b65e82ced9970c93ac8717b7a347 \
	I0911 11:10:04.164186  144272 kubeadm.go:322] 	--control-plane 
	I0911 11:10:04.164193  144272 kubeadm.go:322] 
	I0911 11:10:04.164260  144272 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0911 11:10:04.164264  144272 kubeadm.go:322] 
	I0911 11:10:04.164324  144272 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token apfhwz.hskbzfzojvhioe9a \
	I0911 11:10:04.164399  144272 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:29dd5c485ccb33701674fbb0d9adfdc876e5b65e82ced9970c93ac8717b7a347 
	I0911 11:10:04.166217  144272 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1041-gcp\n", err: exit status 1
	I0911 11:10:04.166346  144272 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0911 11:10:04.166372  144272 cni.go:84] Creating CNI manager for ""
	I0911 11:10:04.166385  144272 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0911 11:10:04.168319  144272 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0911 11:10:04.169650  144272 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0911 11:10:04.173192  144272 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.1/kubectl ...
	I0911 11:10:04.173215  144272 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0911 11:10:04.189223  144272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0911 11:10:04.800420  144272 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0911 11:10:04.800535  144272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=58460de6978298fe1c37b30354468f3a287d03e9 minikube.k8s.io/name=addons-387581 minikube.k8s.io/updated_at=2023_09_11T11_10_04_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 11:10:04.800532  144272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 11:10:04.807296  144272 ops.go:34] apiserver oom_adj: -16
	I0911 11:10:04.906274  144272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 11:10:04.999446  144272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 11:10:05.561709  144272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 11:10:06.061707  144272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 11:10:06.561263  144272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 11:10:07.061148  144272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 11:10:07.561744  144272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 11:10:08.061934  144272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 11:10:08.561838  144272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 11:10:09.061559  144272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 11:10:09.561535  144272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 11:10:10.061907  144272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 11:10:10.561934  144272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 11:10:11.061236  144272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 11:10:11.561324  144272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 11:10:12.061949  144272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 11:10:12.561912  144272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 11:10:13.061863  144272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 11:10:13.561856  144272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 11:10:14.061466  144272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 11:10:14.561175  144272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 11:10:15.061750  144272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 11:10:15.561349  144272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 11:10:16.062110  144272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 11:10:16.561795  144272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 11:10:16.673136  144272 kubeadm.go:1081] duration metric: took 11.872674111s to wait for elevateKubeSystemPrivileges.
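
	The burst of identical kubectl runs above is minikube polling, at roughly 500ms intervals, until the default service account exists; that is the condition elevateKubeSystemPrivileges waits on before the cluster-admin binding can take effect. A bash rendering of the wait loop (illustrative only, not minikube's Go code):

	    # Poll for the default ServiceAccount before granting kube-system privileges.
	    until sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default \
	        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	      sleep 0.5
	    done
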
	I0911 11:10:16.673176  144272 kubeadm.go:406] StartCluster complete in 21.519938769s
	I0911 11:10:16.673203  144272 settings.go:142] acquiring lock: {Name:mk01327a907b1ed5b7990abeca4c89109d2bed5d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 11:10:16.673329  144272 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17223-136166/kubeconfig
	I0911 11:10:16.673849  144272 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17223-136166/kubeconfig: {Name:mk3da3a5a3a5d35dd9d56a597907266732eec114 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 11:10:16.674133  144272 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0911 11:10:16.674141  144272 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:true]
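
	The toEnable map above comes from the profile's addon selections; each entry maps to an addons enable/disable toggle on the profile. An illustrative single-addon invocation (the bulk path below flips them all at once):

	    # Enable one addon on this profile by hand.
	    out/minikube-linux-amd64 -p addons-387581 addons enable metrics-server --alsologtostderr -v=1
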
	I0911 11:10:16.674271  144272 addons.go:69] Setting volumesnapshots=true in profile "addons-387581"
	I0911 11:10:16.674301  144272 addons.go:69] Setting default-storageclass=true in profile "addons-387581"
	I0911 11:10:16.674315  144272 addons.go:231] Setting addon volumesnapshots=true in "addons-387581"
	I0911 11:10:16.674324  144272 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-387581"
	I0911 11:10:16.674347  144272 config.go:182] Loaded profile config "addons-387581": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0911 11:10:16.674373  144272 host.go:66] Checking if "addons-387581" exists ...
	I0911 11:10:16.674392  144272 addons.go:69] Setting metrics-server=true in profile "addons-387581"
	I0911 11:10:16.674405  144272 addons.go:231] Setting addon metrics-server=true in "addons-387581"
	I0911 11:10:16.674445  144272 host.go:66] Checking if "addons-387581" exists ...
	I0911 11:10:16.674702  144272 cli_runner.go:164] Run: docker container inspect addons-387581 --format={{.State.Status}}
	I0911 11:10:16.674839  144272 cli_runner.go:164] Run: docker container inspect addons-387581 --format={{.State.Status}}
	I0911 11:10:16.674858  144272 cli_runner.go:164] Run: docker container inspect addons-387581 --format={{.State.Status}}
	I0911 11:10:16.674879  144272 addons.go:69] Setting gcp-auth=true in profile "addons-387581"
	I0911 11:10:16.674904  144272 mustload.go:65] Loading cluster: addons-387581
	I0911 11:10:16.674282  144272 addons.go:69] Setting ingress=true in profile "addons-387581"
	I0911 11:10:16.674923  144272 addons.go:69] Setting helm-tiller=true in profile "addons-387581"
	I0911 11:10:16.674951  144272 addons.go:69] Setting registry=true in profile "addons-387581"
	I0911 11:10:16.674964  144272 addons.go:69] Setting inspektor-gadget=true in profile "addons-387581"
	I0911 11:10:16.674939  144272 addons.go:231] Setting addon ingress=true in "addons-387581"
	I0911 11:10:16.674977  144272 addons.go:231] Setting addon inspektor-gadget=true in "addons-387581"
	I0911 11:10:16.674977  144272 addons.go:69] Setting storage-provisioner=true in profile "addons-387581"
	I0911 11:10:16.674990  144272 addons.go:231] Setting addon storage-provisioner=true in "addons-387581"
	I0911 11:10:16.675011  144272 host.go:66] Checking if "addons-387581" exists ...
	I0911 11:10:16.675019  144272 host.go:66] Checking if "addons-387581" exists ...
	I0911 11:10:16.675041  144272 host.go:66] Checking if "addons-387581" exists ...
	I0911 11:10:16.675103  144272 config.go:182] Loaded profile config "addons-387581": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0911 11:10:16.675351  144272 cli_runner.go:164] Run: docker container inspect addons-387581 --format={{.State.Status}}
	I0911 11:10:16.675401  144272 cli_runner.go:164] Run: docker container inspect addons-387581 --format={{.State.Status}}
	I0911 11:10:16.675439  144272 cli_runner.go:164] Run: docker container inspect addons-387581 --format={{.State.Status}}
	I0911 11:10:16.675440  144272 cli_runner.go:164] Run: docker container inspect addons-387581 --format={{.State.Status}}
	I0911 11:10:16.674297  144272 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-387581"
	I0911 11:10:16.675506  144272 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-387581"
	I0911 11:10:16.674291  144272 addons.go:69] Setting cloud-spanner=true in profile "addons-387581"
	I0911 11:10:16.675541  144272 host.go:66] Checking if "addons-387581" exists ...
	I0911 11:10:16.675544  144272 addons.go:231] Setting addon cloud-spanner=true in "addons-387581"
	I0911 11:10:16.674965  144272 addons.go:231] Setting addon helm-tiller=true in "addons-387581"
	I0911 11:10:16.675598  144272 host.go:66] Checking if "addons-387581" exists ...
	I0911 11:10:16.675648  144272 host.go:66] Checking if "addons-387581" exists ...
	I0911 11:10:16.674967  144272 addons.go:231] Setting addon registry=true in "addons-387581"
	I0911 11:10:16.675779  144272 host.go:66] Checking if "addons-387581" exists ...
	I0911 11:10:16.676011  144272 cli_runner.go:164] Run: docker container inspect addons-387581 --format={{.State.Status}}
	I0911 11:10:16.676128  144272 cli_runner.go:164] Run: docker container inspect addons-387581 --format={{.State.Status}}
	I0911 11:10:16.676172  144272 cli_runner.go:164] Run: docker container inspect addons-387581 --format={{.State.Status}}
	I0911 11:10:16.675937  144272 addons.go:69] Setting ingress-dns=true in profile "addons-387581"
	I0911 11:10:16.676632  144272 cli_runner.go:164] Run: docker container inspect addons-387581 --format={{.State.Status}}
	I0911 11:10:16.677252  144272 addons.go:231] Setting addon ingress-dns=true in "addons-387581"
	I0911 11:10:16.677382  144272 host.go:66] Checking if "addons-387581" exists ...
	I0911 11:10:16.677924  144272 cli_runner.go:164] Run: docker container inspect addons-387581 --format={{.State.Status}}
	I0911 11:10:16.708326  144272 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0911 11:10:16.710375  144272 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0911 11:10:16.710408  144272 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0911 11:10:16.710470  144272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-387581
	I0911 11:10:16.711660  144272 host.go:66] Checking if "addons-387581" exists ...
	I0911 11:10:16.721083  144272 addons.go:231] Setting addon default-storageclass=true in "addons-387581"
	I0911 11:10:16.722888  144272 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.20.0
	I0911 11:10:16.721131  144272 host.go:66] Checking if "addons-387581" exists ...
	I0911 11:10:16.725850  144272 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I0911 11:10:16.727641  144272 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0911 11:10:16.727679  144272 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0911 11:10:16.727724  144272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-387581
	I0911 11:10:16.724397  144272 addons.go:423] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0911 11:10:16.727837  144272 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0911 11:10:16.724837  144272 cli_runner.go:164] Run: docker container inspect addons-387581 --format={{.State.Status}}
	I0911 11:10:16.727865  144272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-387581
	I0911 11:10:16.734985  144272 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.8.2
	I0911 11:10:16.736568  144272 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0911 11:10:16.738239  144272 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0911 11:10:16.738260  144272 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0911 11:10:16.738320  144272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-387581
	I0911 11:10:16.736542  144272 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0911 11:10:16.740159  144272 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.9
	I0911 11:10:16.741671  144272 addons.go:423] installing /etc/kubernetes/addons/deployment.yaml
	I0911 11:10:16.741685  144272 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1003 bytes)
	I0911 11:10:16.741727  144272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-387581
	I0911 11:10:16.740143  144272 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0911 11:10:16.743671  144272 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0911 11:10:16.744252  144272 addons.go:423] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0911 11:10:16.748803  144272 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0911 11:10:16.747105  144272 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0911 11:10:16.747124  144272 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16083 bytes)
	I0911 11:10:16.750356  144272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-387581
	I0911 11:10:16.750573  144272 addons.go:423] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0911 11:10:16.750586  144272 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0911 11:10:16.750623  144272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-387581
	I0911 11:10:16.753505  144272 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0911 11:10:16.755591  144272 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0911 11:10:16.757241  144272 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0911 11:10:16.758850  144272 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0911 11:10:16.758874  144272 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0911 11:10:16.758933  144272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-387581
	I0911 11:10:16.760951  144272 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0911 11:10:16.760077  144272 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32892 SSHKeyPath:/home/jenkins/minikube-integration/17223-136166/.minikube/machines/addons-387581/id_rsa Username:docker}
	I0911 11:10:16.760451  144272 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32892 SSHKeyPath:/home/jenkins/minikube-integration/17223-136166/.minikube/machines/addons-387581/id_rsa Username:docker}
	I0911 11:10:16.765405  144272 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0911 11:10:16.767978  144272 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0911 11:10:16.768937  144272 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0911 11:10:16.770196  144272 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0911 11:10:16.770264  144272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-387581
	I0911 11:10:16.770449  144272 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0911 11:10:16.772051  144272 addons.go:423] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0911 11:10:16.772069  144272 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0911 11:10:16.772127  144272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-387581
	I0911 11:10:16.782264  144272 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32892 SSHKeyPath:/home/jenkins/minikube-integration/17223-136166/.minikube/machines/addons-387581/id_rsa Username:docker}
	I0911 11:10:16.784510  144272 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-387581" context rescaled to 1 replicas
	I0911 11:10:16.784554  144272 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}
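The rescale logged by kapi.go:248 above is performed through the Kubernetes client API; a single-node cluster does not need more than one coredns replica, so minikube trims the deployment down. As a plain kubectl command (illustration only, not what minikube actually runs) it would be:

	kubectl --context addons-387581 -n kube-system scale deployment coredns --replicas=1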
	I0911 11:10:16.791839  144272 out.go:177] * Verifying Kubernetes components...
	I0911 11:10:16.795211  144272 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0911 11:10:16.794213  144272 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32892 SSHKeyPath:/home/jenkins/minikube-integration/17223-136166/.minikube/machines/addons-387581/id_rsa Username:docker}
	I0911 11:10:16.798081  144272 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32892 SSHKeyPath:/home/jenkins/minikube-integration/17223-136166/.minikube/machines/addons-387581/id_rsa Username:docker}
	I0911 11:10:16.811623  144272 out.go:177]   - Using image docker.io/registry:2.8.1
	I0911 11:10:16.813392  144272 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I0911 11:10:16.815066  144272 addons.go:423] installing /etc/kubernetes/addons/registry-rc.yaml
	I0911 11:10:16.815090  144272 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0911 11:10:16.815148  144272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-387581
	I0911 11:10:16.815814  144272 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32892 SSHKeyPath:/home/jenkins/minikube-integration/17223-136166/.minikube/machines/addons-387581/id_rsa Username:docker}
	I0911 11:10:16.819444  144272 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32892 SSHKeyPath:/home/jenkins/minikube-integration/17223-136166/.minikube/machines/addons-387581/id_rsa Username:docker}
	I0911 11:10:16.820208  144272 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32892 SSHKeyPath:/home/jenkins/minikube-integration/17223-136166/.minikube/machines/addons-387581/id_rsa Username:docker}
	I0911 11:10:16.821131  144272 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32892 SSHKeyPath:/home/jenkins/minikube-integration/17223-136166/.minikube/machines/addons-387581/id_rsa Username:docker}
	I0911 11:10:16.821584  144272 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32892 SSHKeyPath:/home/jenkins/minikube-integration/17223-136166/.minikube/machines/addons-387581/id_rsa Username:docker}
	I0911 11:10:16.835977  144272 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32892 SSHKeyPath:/home/jenkins/minikube-integration/17223-136166/.minikube/machines/addons-387581/id_rsa Username:docker}
	I0911 11:10:16.976958  144272 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0911 11:10:16.977973  144272 node_ready.go:35] waiting up to 6m0s for node "addons-387581" to be "Ready" ...
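node_ready.go polls the node object until its Ready condition turns True; the node "addons-387581" has status "Ready":"False" lines that recur below are individual probes of that loop. A one-shot equivalent check (illustration only):

	kubectl --context addons-387581 get node addons-387581 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'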
	I0911 11:10:17.065897  144272 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0911 11:10:17.065926  144272 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0911 11:10:17.082320  144272 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0911 11:10:17.082356  144272 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0911 11:10:17.088272  144272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0911 11:10:17.174295  144272 addons.go:423] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0911 11:10:17.174329  144272 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0911 11:10:17.179809  144272 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0911 11:10:17.179832  144272 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0911 11:10:17.268247  144272 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0911 11:10:17.268275  144272 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0911 11:10:17.271989  144272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0911 11:10:17.272751  144272 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0911 11:10:17.272813  144272 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0911 11:10:17.275625  144272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0911 11:10:17.279510  144272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0911 11:10:17.282360  144272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0911 11:10:17.360068  144272 addons.go:423] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0911 11:10:17.360125  144272 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0911 11:10:17.370621  144272 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0911 11:10:17.370659  144272 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0911 11:10:17.376625  144272 addons.go:423] installing /etc/kubernetes/addons/registry-svc.yaml
	I0911 11:10:17.376650  144272 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0911 11:10:17.378834  144272 addons.go:423] installing /etc/kubernetes/addons/ig-role.yaml
	I0911 11:10:17.378859  144272 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0911 11:10:17.460810  144272 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0911 11:10:17.460847  144272 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0911 11:10:17.559811  144272 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0911 11:10:17.559842  144272 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0911 11:10:17.566750  144272 addons.go:423] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0911 11:10:17.566777  144272 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0911 11:10:17.576102  144272 addons.go:423] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0911 11:10:17.576134  144272 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0911 11:10:17.660441  144272 addons.go:423] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0911 11:10:17.660473  144272 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0911 11:10:17.663435  144272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0911 11:10:17.668102  144272 addons.go:423] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0911 11:10:17.668130  144272 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0911 11:10:17.782254  144272 addons.go:423] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0911 11:10:17.782282  144272 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0911 11:10:17.870621  144272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0911 11:10:17.875059  144272 addons.go:423] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0911 11:10:17.875086  144272 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0911 11:10:17.876443  144272 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0911 11:10:17.876466  144272 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0911 11:10:17.880093  144272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0911 11:10:18.172055  144272 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0911 11:10:18.172085  144272 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0911 11:10:18.174383  144272 addons.go:423] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0911 11:10:18.174412  144272 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0911 11:10:18.180979  144272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0911 11:10:18.459680  144272 addons.go:423] installing /etc/kubernetes/addons/ig-crd.yaml
	I0911 11:10:18.459712  144272 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0911 11:10:18.860439  144272 addons.go:423] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0911 11:10:18.860474  144272 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7741 bytes)
	I0911 11:10:18.866163  144272 addons.go:423] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0911 11:10:18.866205  144272 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0911 11:10:19.077761  144272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0911 11:10:19.176289  144272 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0911 11:10:19.176332  144272 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0911 11:10:19.264087  144272 node_ready.go:58] node "addons-387581" has status "Ready":"False"
	I0911 11:10:19.269482  144272 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.292480687s)
	I0911 11:10:19.269563  144272 start.go:901] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
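The two-second bash pipeline that just completed rewrites the coredns ConfigMap in place: sed inserts a hosts stanza ahead of the forward . /etc/resolv.conf line and a log directive ahead of errors, then the edited Corefile is pushed back with kubectl replace. Reconstructed from those sed expressions (not captured from this run), the relevant Corefile fragment afterwards reads roughly:

	log
	errors
	hosts {
	   192.168.49.1 host.minikube.internal
	   fallthrough
	}
	forward . /etc/resolv.conf

so pods inside the cluster can resolve host.minikube.internal to the Docker network gateway, 192.168.49.1.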
	I0911 11:10:19.560172  144272 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0911 11:10:19.560250  144272 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0911 11:10:19.770310  144272 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0911 11:10:19.770401  144272 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0911 11:10:20.065737  144272 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0911 11:10:20.065842  144272 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0911 11:10:20.259047  144272 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0911 11:10:20.259139  144272 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0911 11:10:20.371386  144272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0911 11:10:21.367220  144272 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.278907355s)
	I0911 11:10:21.665169  144272 node_ready.go:58] node "addons-387581" has status "Ready":"False"
	I0911 11:10:22.558771  144272 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.286743364s)
	I0911 11:10:22.558814  144272 addons.go:467] Verifying addon ingress=true in "addons-387581"
	I0911 11:10:22.562391  144272 out.go:177] * Verifying ingress addon...
	I0911 11:10:22.558905  144272 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (5.283208439s)
	I0911 11:10:22.558978  144272 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.279388577s)
	I0911 11:10:22.559010  144272 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.27662242s)
	I0911 11:10:22.559118  144272 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.895651426s)
	I0911 11:10:22.562513  144272 addons.go:467] Verifying addon metrics-server=true in "addons-387581"
	I0911 11:10:22.559203  144272 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.688541819s)
	I0911 11:10:22.562539  144272 addons.go:467] Verifying addon registry=true in "addons-387581"
	I0911 11:10:22.559261  144272 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (4.679133663s)
	I0911 11:10:22.559373  144272 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.378357717s)
	I0911 11:10:22.564676  144272 out.go:177] * Verifying registry addon...
	W0911 11:10:22.562630  144272 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0911 11:10:22.559437  144272 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (3.481638797s)
	I0911 11:10:22.566323  144272 retry.go:31] will retry after 264.699649ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
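Both failure reports above are the same CRD-establishment race: the snapshot CRDs and a VolumeSnapshotClass that instantiates them go through a single kubectl apply, and the API server has not yet registered snapshot.storage.k8s.io/v1 when the class is validated, hence "ensure CRDs are installed first". The retry below (11:10:22.832877, with --force) succeeds largely because the first attempt already created the CRDs, which are established by the time the command re-runs. The race-free pattern, shown only as an illustration and not what minikube runs here, is to apply the CRDs first and wait for them to be Established:

	kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	kubectl wait --for=condition=established crd/volumesnapshotclasses.snapshot.storage.k8s.io
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml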
	I0911 11:10:22.566955  144272 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0911 11:10:22.568627  144272 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0911 11:10:22.573170  144272 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0911 11:10:22.573195  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 11:10:22.573295  144272 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0911 11:10:22.573312  144272 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 11:10:22.576299  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 11:10:22.576410  144272 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 11:10:22.832877  144272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0911 11:10:23.080634  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 11:10:23.080870  144272 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 11:10:23.289739  144272 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.918290908s)
	I0911 11:10:23.289872  144272 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-387581"
	I0911 11:10:23.292121  144272 out.go:177] * Verifying csi-hostpath-driver addon...
	I0911 11:10:23.297427  144272 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0911 11:10:23.301006  144272 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0911 11:10:23.301025  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 11:10:23.305590  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
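Each kapi.go:96 line in the long stretch that follows is one tick of a poll loop that re-lists the labelled pods and records their phase until all of them are Running. One such tick done by hand (illustration only):

	kubectl --context addons-387581 -n kube-system get pods -l kubernetes.io/minikube-addons=csi-hostpath-driver -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.phase}{"\n"}{end}'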
	I0911 11:10:23.519696  144272 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0911 11:10:23.519758  144272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-387581
	I0911 11:10:23.538448  144272 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32892 SSHKeyPath:/home/jenkins/minikube-integration/17223-136166/.minikube/machines/addons-387581/id_rsa Username:docker}
	I0911 11:10:23.581174  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 11:10:23.581448  144272 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 11:10:23.639177  144272 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0911 11:10:23.672891  144272 addons.go:231] Setting addon gcp-auth=true in "addons-387581"
	I0911 11:10:23.672953  144272 host.go:66] Checking if "addons-387581" exists ...
	I0911 11:10:23.673449  144272 cli_runner.go:164] Run: docker container inspect addons-387581 --format={{.State.Status}}
	I0911 11:10:23.701351  144272 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0911 11:10:23.701401  144272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-387581
	I0911 11:10:23.718305  144272 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32892 SSHKeyPath:/home/jenkins/minikube-integration/17223-136166/.minikube/machines/addons-387581/id_rsa Username:docker}
	I0911 11:10:23.809967  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 11:10:23.836374  144272 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.003451583s)
	I0911 11:10:23.847608  144272 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I0911 11:10:23.849475  144272 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0911 11:10:23.851315  144272 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0911 11:10:23.851334  144272 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0911 11:10:23.868584  144272 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0911 11:10:23.868626  144272 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0911 11:10:23.884469  144272 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0911 11:10:23.884495  144272 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5412 bytes)
	I0911 11:10:23.900283  144272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0911 11:10:24.071106  144272 node_ready.go:58] node "addons-387581" has status "Ready":"False"
	I0911 11:10:24.080935  144272 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 11:10:24.081384  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 11:10:24.365556  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 11:10:24.662278  144272 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 11:10:24.663159  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 11:10:24.862773  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 11:10:25.081219  144272 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 11:10:25.081251  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 11:10:25.366515  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 11:10:25.379229  144272 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.478902988s)
	I0911 11:10:25.380755  144272 addons.go:467] Verifying addon gcp-auth=true in "addons-387581"
	I0911 11:10:25.461053  144272 out.go:177] * Verifying gcp-auth addon...
	I0911 11:10:25.464070  144272 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0911 11:10:25.479201  144272 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0911 11:10:25.479225  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 11:10:25.488167  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 11:10:25.660738  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 11:10:25.661019  144272 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 11:10:25.862849  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 11:10:26.068900  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 11:10:26.072025  144272 node_ready.go:58] node "addons-387581" has status "Ready":"False"
	I0911 11:10:26.082289  144272 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 11:10:26.082783  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 11:10:26.361629  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 11:10:26.561351  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 11:10:26.580754  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 11:10:26.581246  144272 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 11:10:26.862708  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 11:10:27.060938  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 11:10:27.082514  144272 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 11:10:27.083196  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 11:10:27.360554  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 11:10:27.559753  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 11:10:27.580729  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 11:10:27.581346  144272 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 11:10:27.864035  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 11:10:28.062165  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 11:10:28.072498  144272 node_ready.go:58] node "addons-387581" has status "Ready":"False"
	I0911 11:10:28.082258  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 11:10:28.083698  144272 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 11:10:28.360591  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 11:10:28.561055  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 11:10:28.582321  144272 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 11:10:28.584026  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 11:10:28.810832  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 11:10:28.991888  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 11:10:29.080634  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 11:10:29.080840  144272 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 11:10:29.310424  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 11:10:29.492773  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 11:10:29.580868  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 11:10:29.581131  144272 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 11:10:29.809862  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 11:10:29.992120  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 11:10:30.080709  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 11:10:30.081072  144272 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 11:10:30.309994  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 11:10:30.492146  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 11:10:30.571933  144272 node_ready.go:58] node "addons-387581" has status "Ready":"False"
	I0911 11:10:30.580769  144272 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 11:10:30.581194  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 11:10:30.810194  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 11:10:30.992297  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 11:10:31.080525  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 11:10:31.080693  144272 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 11:10:31.311562  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 11:10:31.492183  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 11:10:31.579773  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 11:10:31.580116  144272 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 11:10:31.810238  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 11:10:31.992390  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 11:10:32.080326  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 11:10:32.080505  144272 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 11:10:32.311610  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 11:10:32.492511  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 11:10:32.572014  144272 node_ready.go:58] node "addons-387581" has status "Ready":"False"
	I0911 11:10:32.580641  144272 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 11:10:32.580831  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 11:10:32.810370  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 11:10:32.992650  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 11:10:33.080816  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 11:10:33.081010  144272 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 11:10:33.309473  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 11:10:33.491293  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 11:10:33.580375  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 11:10:33.580722  144272 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 11:10:33.810415  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 11:10:33.992340  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 11:10:34.080589  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 11:10:34.080812  144272 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 11:10:34.310073  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 11:10:34.491782  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 11:10:34.580424  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 11:10:34.580570  144272 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 11:10:34.810038  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 11:10:34.991755  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 11:10:35.071144  144272 node_ready.go:58] node "addons-387581" has status "Ready":"False"
	I0911 11:10:35.081634  144272 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 11:10:35.082524  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 11:10:35.309665  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 11:10:35.491216  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 11:10:35.579632  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 11:10:35.580283  144272 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 11:10:35.810384  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 11:10:35.992027  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 11:10:36.079851  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 11:10:36.080042  144272 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 11:10:36.309479  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 11:10:36.492026  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 11:10:36.580539  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 11:10:36.580773  144272 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 11:10:36.809437  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 11:10:36.992044  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 11:10:37.071567  144272 node_ready.go:58] node "addons-387581" has status "Ready":"False"
	I0911 11:10:37.080433  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 11:10:37.080959  144272 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 11:10:37.310206  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 11:10:37.491872  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 11:10:37.580648  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 11:10:37.581202  144272 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 11:10:37.810341  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 11:10:37.992095  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 11:10:38.079687  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 11:10:38.079875  144272 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 11:10:38.309215  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 11:10:38.492025  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 11:10:38.580587  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 11:10:38.581019  144272 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 11:10:38.809605  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 11:10:38.991433  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 11:10:39.071944  144272 node_ready.go:58] node "addons-387581" has status "Ready":"False"
	I0911 11:10:39.079891  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 11:10:39.080218  144272 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 11:10:39.309794  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 11:10:39.491431  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 11:10:39.579926  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 11:10:39.580181  144272 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 11:10:39.809688  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 11:10:39.991378  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 11:10:40.080386  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 11:10:40.080707  144272 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 11:10:40.310260  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 11:10:40.491896  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 11:10:40.580690  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 11:10:40.580833  144272 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 11:10:40.809198  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 11:10:40.991844  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 11:10:41.082190  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 11:10:41.082450  144272 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 11:10:41.309930  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 11:10:41.491652  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 11:10:41.571178  144272 node_ready.go:58] node "addons-387581" has status "Ready":"False"
	I0911 11:10:41.580108  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 11:10:41.580318  144272 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 11:10:41.809851  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 11:10:41.991719  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 11:10:42.080508  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 11:10:42.080679  144272 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 11:10:42.310656  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 11:10:42.491463  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 11:10:42.579805  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 11:10:42.580097  144272 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 11:10:42.809734  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 11:10:42.991797  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 11:10:43.080239  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 11:10:43.080479  144272 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 11:10:43.310043  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 11:10:43.491483  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 11:10:43.572370  144272 node_ready.go:58] node "addons-387581" has status "Ready":"False"
	I0911 11:10:43.580314  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 11:10:43.580599  144272 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 11:10:43.809965  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 11:10:43.991741  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 11:10:44.080184  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 11:10:44.080285  144272 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 11:10:44.309876  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 11:10:44.491342  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 11:10:44.580185  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 11:10:44.580452  144272 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 11:10:44.809733  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 11:10:44.991460  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 11:10:45.080160  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 11:10:45.080489  144272 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 11:10:45.309740  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 11:10:45.491383  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 11:10:45.579994  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 11:10:45.580192  144272 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 11:10:45.809720  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 11:10:45.991563  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 11:10:46.071069  144272 node_ready.go:58] node "addons-387581" has status "Ready":"False"
	I0911 11:10:46.080583  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 11:10:46.080800  144272 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 11:10:46.310193  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 11:10:46.492023  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 11:10:46.580922  144272 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 11:10:46.580942  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 11:10:46.809671  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 11:10:46.991472  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 11:10:47.080589  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 11:10:47.080791  144272 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 11:10:47.310893  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 11:10:47.491475  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 11:10:47.580207  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 11:10:47.580430  144272 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 11:10:47.809812  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 11:10:47.991269  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 11:10:48.071983  144272 node_ready.go:58] node "addons-387581" has status "Ready":"False"
	I0911 11:10:48.079846  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 11:10:48.080059  144272 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 11:10:48.309512  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 11:10:48.492268  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 11:10:48.579775  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 11:10:48.580060  144272 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 11:10:48.809721  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 11:10:48.993239  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 11:10:49.080197  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 11:10:49.080409  144272 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 11:10:49.309839  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 11:10:49.491856  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 11:10:49.580428  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 11:10:49.580598  144272 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 11:10:49.810130  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 11:10:49.991878  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 11:10:50.080699  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 11:10:50.080960  144272 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 11:10:50.309498  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 11:10:50.491205  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 11:10:50.571734  144272 node_ready.go:49] node "addons-387581" has status "Ready":"True"
	I0911 11:10:50.571814  144272 node_ready.go:38] duration metric: took 33.593809218s waiting for node "addons-387581" to be "Ready" ...
	I0911 11:10:50.571829  144272 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0911 11:10:50.579345  144272 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-5fcnk" in "kube-system" namespace to be "Ready" ...
	I0911 11:10:50.581557  144272 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 11:10:50.581911  144272 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0911 11:10:50.581928  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 11:10:50.812095  144272 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0911 11:10:50.812125  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 11:10:50.992009  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 11:10:51.082281  144272 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 11:10:51.082933  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 11:10:51.311777  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 11:10:51.491608  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 11:10:51.580332  144272 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 11:10:51.580547  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 11:10:51.810553  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 11:10:51.991867  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 11:10:52.080466  144272 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 11:10:52.080770  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 11:10:52.363945  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 11:10:52.561887  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 11:10:52.584023  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 11:10:52.584390  144272 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 11:10:52.665667  144272 pod_ready.go:92] pod "coredns-5dd5756b68-5fcnk" in "kube-system" namespace has status "Ready":"True"
	I0911 11:10:52.665700  144272 pod_ready.go:81] duration metric: took 2.086320719s waiting for pod "coredns-5dd5756b68-5fcnk" in "kube-system" namespace to be "Ready" ...
	I0911 11:10:52.665730  144272 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-387581" in "kube-system" namespace to be "Ready" ...
	I0911 11:10:52.671882  144272 pod_ready.go:92] pod "etcd-addons-387581" in "kube-system" namespace has status "Ready":"True"
	I0911 11:10:52.671909  144272 pod_ready.go:81] duration metric: took 6.170958ms waiting for pod "etcd-addons-387581" in "kube-system" namespace to be "Ready" ...
	I0911 11:10:52.671926  144272 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-387581" in "kube-system" namespace to be "Ready" ...
	I0911 11:10:52.679158  144272 pod_ready.go:92] pod "kube-apiserver-addons-387581" in "kube-system" namespace has status "Ready":"True"
	I0911 11:10:52.679178  144272 pod_ready.go:81] duration metric: took 7.244777ms waiting for pod "kube-apiserver-addons-387581" in "kube-system" namespace to be "Ready" ...
	I0911 11:10:52.679194  144272 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-387581" in "kube-system" namespace to be "Ready" ...
	I0911 11:10:52.684745  144272 pod_ready.go:92] pod "kube-controller-manager-addons-387581" in "kube-system" namespace has status "Ready":"True"
	I0911 11:10:52.684776  144272 pod_ready.go:81] duration metric: took 5.57262ms waiting for pod "kube-controller-manager-addons-387581" in "kube-system" namespace to be "Ready" ...
	I0911 11:10:52.684795  144272 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bkffp" in "kube-system" namespace to be "Ready" ...
	I0911 11:10:52.863646  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 11:10:52.972181  144272 pod_ready.go:92] pod "kube-proxy-bkffp" in "kube-system" namespace has status "Ready":"True"
	I0911 11:10:52.972209  144272 pod_ready.go:81] duration metric: took 287.406852ms waiting for pod "kube-proxy-bkffp" in "kube-system" namespace to be "Ready" ...
	I0911 11:10:52.972219  144272 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-387581" in "kube-system" namespace to be "Ready" ...
	I0911 11:10:52.992426  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 11:10:53.081351  144272 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 11:10:53.081487  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 11:10:53.310469  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 11:10:53.371704  144272 pod_ready.go:92] pod "kube-scheduler-addons-387581" in "kube-system" namespace has status "Ready":"True"
	I0911 11:10:53.371732  144272 pod_ready.go:81] duration metric: took 399.505781ms waiting for pod "kube-scheduler-addons-387581" in "kube-system" namespace to be "Ready" ...
	I0911 11:10:53.371746  144272 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-7c66d45ddc-hh99k" in "kube-system" namespace to be "Ready" ...
	I0911 11:10:53.491920  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 11:10:53.580309  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 11:10:53.580420  144272 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 11:10:53.810967  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 11:10:53.992782  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 11:10:54.080612  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 11:10:54.080712  144272 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 11:10:54.371693  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 11:10:54.563526  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 11:10:54.581026  144272 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 11:10:54.581211  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 11:10:54.811206  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 11:10:54.992412  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 11:10:55.081123  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 11:10:55.081306  144272 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 11:10:55.310638  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 11:10:55.492434  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 11:10:55.582403  144272 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 11:10:55.582954  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 11:10:55.678292  144272 pod_ready.go:102] pod "metrics-server-7c66d45ddc-hh99k" in "kube-system" namespace has status "Ready":"False"
	I0911 11:10:55.860140  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 11:10:55.991567  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 11:10:56.082898  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 11:10:56.083060  144272 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 11:10:56.310339  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 11:10:56.492091  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 11:10:56.580864  144272 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 11:10:56.581060  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 11:10:56.810878  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 11:10:56.992018  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 11:10:57.080748  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 11:10:57.080757  144272 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 11:10:57.313901  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 11:10:57.491928  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 11:10:57.580710  144272 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 11:10:57.580974  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 11:10:57.811247  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 11:10:57.991734  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 11:10:58.080555  144272 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 11:10:58.080703  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 11:10:58.177652  144272 pod_ready.go:102] pod "metrics-server-7c66d45ddc-hh99k" in "kube-system" namespace has status "Ready":"False"
	I0911 11:10:58.311100  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 11:10:58.491380  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 11:10:58.580964  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 11:10:58.581039  144272 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 11:10:58.810583  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 11:10:58.991552  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 11:10:59.082019  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 11:10:59.083587  144272 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 11:10:59.361837  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 11:10:59.563740  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 11:10:59.583494  144272 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 11:10:59.584228  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 11:10:59.684929  144272 pod_ready.go:92] pod "metrics-server-7c66d45ddc-hh99k" in "kube-system" namespace has status "Ready":"True"
	I0911 11:10:59.684966  144272 pod_ready.go:81] duration metric: took 6.313210291s waiting for pod "metrics-server-7c66d45ddc-hh99k" in "kube-system" namespace to be "Ready" ...
	I0911 11:10:59.684992  144272 pod_ready.go:38] duration metric: took 9.113148647s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0911 11:10:59.685013  144272 api_server.go:52] waiting for apiserver process to appear ...
	I0911 11:10:59.685084  144272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 11:10:59.773305  144272 api_server.go:72] duration metric: took 42.988715591s to wait for apiserver process to appear ...
	I0911 11:10:59.773333  144272 api_server.go:88] waiting for apiserver healthz status ...
	I0911 11:10:59.773355  144272 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0911 11:10:59.779866  144272 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0911 11:10:59.781255  144272 api_server.go:141] control plane version: v1.28.1
	I0911 11:10:59.781279  144272 api_server.go:131] duration metric: took 7.938175ms to wait for apiserver health ...
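The healthz probe logged above can be reproduced by hand against the same endpoint. A minimal sketch, assuming the cluster's default RBAC still grants unauthenticated clients read access to /healthz (-k skips verification of the self-signed cluster certificate):

	# probe the apiserver health endpoint shown in the log
	curl -sk https://192.168.49.2:8443/healthz
	ok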
	I0911 11:10:59.781289  144272 system_pods.go:43] waiting for kube-system pods to appear ...
	I0911 11:10:59.864586  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 11:10:59.864968  144272 system_pods.go:59] 18 kube-system pods found
	I0911 11:10:59.864997  144272 system_pods.go:61] "coredns-5dd5756b68-5fcnk" [2e6f819d-4310-44bd-87f9-daf6b62dd82e] Running
	I0911 11:10:59.865006  144272 system_pods.go:61] "csi-hostpath-attacher-0" [01550a8d-65cc-4d89-a31d-5cc95f6517db] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0911 11:10:59.865015  144272 system_pods.go:61] "csi-hostpath-resizer-0" [ad42dc60-e008-4056-8085-6d9b64b801ce] Running
	I0911 11:10:59.865030  144272 system_pods.go:61] "csi-hostpathplugin-bkkx4" [c7d9f7b7-3a19-4d39-afbf-5acfff46bdf2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0911 11:10:59.865044  144272 system_pods.go:61] "etcd-addons-387581" [d23bcb94-5072-45d1-acd3-82f4afe0881f] Running
	I0911 11:10:59.865052  144272 system_pods.go:61] "kindnet-kpzws" [f700cd2b-e490-49b7-b801-ec20ddb77579] Running
	I0911 11:10:59.865059  144272 system_pods.go:61] "kube-apiserver-addons-387581" [8f530e50-68f4-4112-b8b4-358ab894cb46] Running
	I0911 11:10:59.865070  144272 system_pods.go:61] "kube-controller-manager-addons-387581" [309f4832-05e4-4e82-bf4b-f1c914c8eb10] Running
	I0911 11:10:59.865082  144272 system_pods.go:61] "kube-ingress-dns-minikube" [a7772be0-47af-495f-a091-8eb993378efa] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0911 11:10:59.865093  144272 system_pods.go:61] "kube-proxy-bkffp" [3f0baaa1-1748-4c1b-9574-fed45521b8ff] Running
	I0911 11:10:59.865107  144272 system_pods.go:61] "kube-scheduler-addons-387581" [58daa389-69a3-4564-ba23-4d8fc82b26f0] Running
	I0911 11:10:59.865118  144272 system_pods.go:61] "metrics-server-7c66d45ddc-hh99k" [438d5c8d-3fb1-4282-aea2-d898eb14cde8] Running
	I0911 11:10:59.865133  144272 system_pods.go:61] "registry-lsf9c" [229a155a-01b6-4c49-9097-38bc0f421cc7] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0911 11:10:59.865143  144272 system_pods.go:61] "registry-proxy-n62vc" [81a8d184-03d9-4971-b616-c8e87daf001f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0911 11:10:59.865157  144272 system_pods.go:61] "snapshot-controller-58dbcc7b99-4zf8v" [b924bbe5-fd91-4643-9b0f-517a26dfa38f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0911 11:10:59.865170  144272 system_pods.go:61] "snapshot-controller-58dbcc7b99-d42mx" [13f0bbb1-695f-46f2-bc7e-be55f1a37c35] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0911 11:10:59.865182  144272 system_pods.go:61] "storage-provisioner" [29a6c1ac-3fb8-4428-a54e-40cfede12715] Running
	I0911 11:10:59.865192  144272 system_pods.go:61] "tiller-deploy-7b677967b9-sh92l" [4c9748e0-81a5-477c-a57a-a5e7eb91d2f5] Running
	I0911 11:10:59.865200  144272 system_pods.go:74] duration metric: took 83.903843ms to wait for pod list to return data ...
	I0911 11:10:59.865214  144272 default_sa.go:34] waiting for default service account to be created ...
	I0911 11:10:59.867576  144272 default_sa.go:45] found service account: "default"
	I0911 11:10:59.867599  144272 default_sa.go:55] duration metric: took 2.377324ms for default service account to be created ...
	I0911 11:10:59.867609  144272 system_pods.go:116] waiting for k8s-apps to be running ...
	I0911 11:10:59.877513  144272 system_pods.go:86] 18 kube-system pods found
	I0911 11:10:59.877544  144272 system_pods.go:89] "coredns-5dd5756b68-5fcnk" [2e6f819d-4310-44bd-87f9-daf6b62dd82e] Running
	I0911 11:10:59.877560  144272 system_pods.go:89] "csi-hostpath-attacher-0" [01550a8d-65cc-4d89-a31d-5cc95f6517db] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0911 11:10:59.877570  144272 system_pods.go:89] "csi-hostpath-resizer-0" [ad42dc60-e008-4056-8085-6d9b64b801ce] Running
	I0911 11:10:59.877579  144272 system_pods.go:89] "csi-hostpathplugin-bkkx4" [c7d9f7b7-3a19-4d39-afbf-5acfff46bdf2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0911 11:10:59.877587  144272 system_pods.go:89] "etcd-addons-387581" [d23bcb94-5072-45d1-acd3-82f4afe0881f] Running
	I0911 11:10:59.877594  144272 system_pods.go:89] "kindnet-kpzws" [f700cd2b-e490-49b7-b801-ec20ddb77579] Running
	I0911 11:10:59.877615  144272 system_pods.go:89] "kube-apiserver-addons-387581" [8f530e50-68f4-4112-b8b4-358ab894cb46] Running
	I0911 11:10:59.877623  144272 system_pods.go:89] "kube-controller-manager-addons-387581" [309f4832-05e4-4e82-bf4b-f1c914c8eb10] Running
	I0911 11:10:59.877639  144272 system_pods.go:89] "kube-ingress-dns-minikube" [a7772be0-47af-495f-a091-8eb993378efa] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0911 11:10:59.877652  144272 system_pods.go:89] "kube-proxy-bkffp" [3f0baaa1-1748-4c1b-9574-fed45521b8ff] Running
	I0911 11:10:59.877659  144272 system_pods.go:89] "kube-scheduler-addons-387581" [58daa389-69a3-4564-ba23-4d8fc82b26f0] Running
	I0911 11:10:59.877669  144272 system_pods.go:89] "metrics-server-7c66d45ddc-hh99k" [438d5c8d-3fb1-4282-aea2-d898eb14cde8] Running
	I0911 11:10:59.877681  144272 system_pods.go:89] "registry-lsf9c" [229a155a-01b6-4c49-9097-38bc0f421cc7] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0911 11:10:59.877691  144272 system_pods.go:89] "registry-proxy-n62vc" [81a8d184-03d9-4971-b616-c8e87daf001f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0911 11:10:59.877703  144272 system_pods.go:89] "snapshot-controller-58dbcc7b99-4zf8v" [b924bbe5-fd91-4643-9b0f-517a26dfa38f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0911 11:10:59.877719  144272 system_pods.go:89] "snapshot-controller-58dbcc7b99-d42mx" [13f0bbb1-695f-46f2-bc7e-be55f1a37c35] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0911 11:10:59.877735  144272 system_pods.go:89] "storage-provisioner" [29a6c1ac-3fb8-4428-a54e-40cfede12715] Running
	I0911 11:10:59.877742  144272 system_pods.go:89] "tiller-deploy-7b677967b9-sh92l" [4c9748e0-81a5-477c-a57a-a5e7eb91d2f5] Running
	I0911 11:10:59.877789  144272 system_pods.go:126] duration metric: took 10.172983ms to wait for k8s-apps to be running ...
	I0911 11:10:59.877802  144272 system_svc.go:44] waiting for kubelet service to be running ....
	I0911 11:10:59.877856  144272 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0911 11:10:59.894156  144272 system_svc.go:56] duration metric: took 16.343473ms WaitForService to wait for kubelet.
	I0911 11:10:59.894190  144272 kubeadm.go:581] duration metric: took 43.109606254s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0911 11:10:59.894225  144272 node_conditions.go:102] verifying NodePressure condition ...
	I0911 11:10:59.959609  144272 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0911 11:10:59.959648  144272 node_conditions.go:123] node cpu capacity is 8
	I0911 11:10:59.959665  144272 node_conditions.go:105] duration metric: took 65.433863ms to run NodePressure ...
	I0911 11:10:59.959682  144272 start.go:228] waiting for startup goroutines ...
	I0911 11:10:59.992711  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 11:11:00.081153  144272 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 11:11:00.082486  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 11:11:00.312744  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 11:11:00.492786  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 11:11:00.581340  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 11:11:00.581749  144272 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 11:11:00.810882  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 11:11:00.992720  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 11:11:01.080875  144272 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 11:11:01.080931  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 11:11:01.311329  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 11:11:01.491868  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 11:11:01.581185  144272 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 11:11:01.581447  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 11:11:01.811564  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 11:11:01.992137  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 11:11:02.081432  144272 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 11:11:02.082237  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 11:11:02.312733  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 11:11:02.492892  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 11:11:02.581302  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 11:11:02.581367  144272 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 11:11:02.811516  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 11:11:02.992050  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 11:11:03.081360  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 11:11:03.081583  144272 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 11:11:03.312666  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 11:11:03.492459  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 11:11:03.581049  144272 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 11:11:03.581481  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 11:11:03.811129  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 11:11:03.992266  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 11:11:04.080962  144272 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 11:11:04.081086  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 11:11:04.310122  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 11:11:04.491335  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 11:11:04.581167  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 11:11:04.581336  144272 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 11:11:04.810620  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 11:11:04.991913  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 11:11:05.080276  144272 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 11:11:05.080310  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 11:11:05.310896  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 11:11:05.492222  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 11:11:05.581484  144272 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 11:11:05.582043  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 11:11:05.811575  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 11:11:05.992099  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 11:11:06.081003  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 11:11:06.081051  144272 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 11:11:06.310866  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 11:11:06.491776  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 11:11:06.580617  144272 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 11:11:06.580809  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 11:11:06.810736  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 11:11:06.992249  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 11:11:07.080956  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 11:11:07.081112  144272 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 11:11:07.310882  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 11:11:07.494061  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 11:11:07.580713  144272 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 11:11:07.581386  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 11:11:07.810524  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 11:11:07.991497  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 11:11:08.081046  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 11:11:08.081119  144272 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 11:11:08.311249  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 11:11:08.492210  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 11:11:08.581283  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 11:11:08.581447  144272 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 11:11:08.810956  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 11:11:08.992528  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 11:11:09.080841  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 11:11:09.081040  144272 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 11:11:09.311694  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 11:11:09.491680  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 11:11:09.583256  144272 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 11:11:09.583963  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 11:11:09.862744  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 11:11:09.991918  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 11:11:10.081306  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 11:11:10.081385  144272 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 11:11:10.311049  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 11:11:10.492584  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 11:11:10.581358  144272 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 11:11:10.581386  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 11:11:10.810866  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 11:11:10.991901  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 11:11:11.081352  144272 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 11:11:11.081403  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 11:11:11.310812  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 11:11:11.492185  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 11:11:11.581156  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 11:11:11.581162  144272 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 11:11:11.811203  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 11:11:11.992536  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 11:11:12.081857  144272 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 11:11:12.082045  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 11:11:12.312767  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 11:11:12.491424  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 11:11:12.581814  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 11:11:12.582021  144272 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 11:11:12.811276  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 11:11:12.991852  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 11:11:13.080647  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 11:11:13.080984  144272 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 11:11:13.310988  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 11:11:13.492112  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 11:11:13.582397  144272 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 11:11:13.582448  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 11:11:13.811132  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 11:11:13.992298  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 11:11:14.081360  144272 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 11:11:14.081361  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 11:11:14.311847  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 11:11:14.492042  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 11:11:14.580620  144272 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 11:11:14.581673  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 11:11:14.811966  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 11:11:14.992569  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 11:11:15.081306  144272 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 11:11:15.081311  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 11:11:15.311119  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 11:11:15.492277  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 11:11:15.581061  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 11:11:15.581193  144272 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 11:11:15.811763  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 11:11:15.991871  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 11:11:16.081320  144272 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 11:11:16.081379  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 11:11:16.310994  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 11:11:16.491650  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 11:11:16.582005  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 11:11:16.582305  144272 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 11:11:16.810977  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 11:11:16.992292  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 11:11:17.081249  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 11:11:17.081418  144272 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 11:11:17.311463  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 11:11:17.491721  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 11:11:17.582070  144272 kapi.go:107] duration metric: took 55.015112585s to wait for kubernetes.io/minikube-addons=registry ...
	I0911 11:11:17.582328  144272 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 11:11:17.810723  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 11:11:17.991785  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 11:11:18.080175  144272 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 11:11:18.311460  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 11:11:18.562370  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 11:11:18.582670  144272 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 11:11:18.864591  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 11:11:19.062433  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 11:11:19.082285  144272 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 11:11:19.362138  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 11:11:19.563510  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 11:11:19.581950  144272 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 11:11:19.863246  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 11:11:20.061353  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 11:11:20.082821  144272 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 11:11:20.363386  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 11:11:20.492878  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 11:11:20.581815  144272 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 11:11:20.811398  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 11:11:20.991218  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 11:11:21.081004  144272 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 11:11:21.311849  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 11:11:21.492403  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 11:11:21.580704  144272 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 11:11:21.811455  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 11:11:21.992097  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 11:11:22.081463  144272 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 11:11:22.312543  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 11:11:22.491554  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 11:11:22.581512  144272 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 11:11:22.811341  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 11:11:22.993001  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 11:11:23.080558  144272 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 11:11:23.311306  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 11:11:23.491532  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 11:11:23.581109  144272 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 11:11:23.811796  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 11:11:23.992224  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 11:11:24.080510  144272 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 11:11:24.311501  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 11:11:24.559881  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 11:11:24.589273  144272 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 11:11:24.811478  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 11:11:24.991849  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 11:11:25.081620  144272 kapi.go:107] duration metric: took 1m2.512987494s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0911 11:11:25.310494  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 11:11:25.492058  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 11:11:25.810973  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 11:11:25.992433  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 11:11:26.312263  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 11:11:26.491793  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 11:11:26.810796  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 11:11:26.992482  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 11:11:27.311486  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 11:11:27.492491  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 11:11:27.861286  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 11:11:27.991603  144272 kapi.go:107] duration metric: took 1m2.527528827s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0911 11:11:27.994624  144272 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-387581 cluster.
	I0911 11:11:27.996503  144272 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0911 11:11:27.998110  144272 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0911 11:11:28.311357  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 11:11:28.811035  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 11:11:29.311634  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 11:11:29.811088  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 11:11:30.310468  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 11:11:30.811218  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 11:11:31.311249  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 11:11:31.811214  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 11:11:32.311518  144272 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 11:11:32.810368  144272 kapi.go:107] duration metric: took 1m9.512941578s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0911 11:11:32.812552  144272 out.go:177] * Enabled addons: storage-provisioner, cloud-spanner, ingress-dns, default-storageclass, metrics-server, helm-tiller, inspektor-gadget, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0911 11:11:32.814269  144272 addons.go:502] enable addons completed in 1m16.140126853s: enabled=[storage-provisioner cloud-spanner ingress-dns default-storageclass metrics-server helm-tiller inspektor-gadget volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0911 11:11:32.814317  144272 start.go:233] waiting for cluster config update ...
	I0911 11:11:32.814341  144272 start.go:242] writing updated cluster config ...
	I0911 11:11:32.814677  144272 ssh_runner.go:195] Run: rm -f paused
	I0911 11:11:32.864843  144272 start.go:600] kubectl: 1.28.1, cluster: 1.28.1 (minor skew: 0)
	I0911 11:11:32.876345  144272 out.go:177] * Done! kubectl is now configured to use "addons-387581" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* Sep 11 11:14:15 addons-387581 crio[945]: time="2023-09-11 11:14:15.264927684Z" level=info msg="Removing container: e49721400dd8bf7d2db617976f2a6068e47125163383ad740977143e87cc3b76" id=8ed898b1-c846-462d-931d-b8f318c5e3e3 name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 11 11:14:15 addons-387581 crio[945]: time="2023-09-11 11:14:15.282797283Z" level=info msg="Removed container e49721400dd8bf7d2db617976f2a6068e47125163383ad740977143e87cc3b76: kube-system/kube-ingress-dns-minikube/minikube-ingress-dns" id=8ed898b1-c846-462d-931d-b8f318c5e3e3 name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 11 11:14:15 addons-387581 crio[945]: time="2023-09-11 11:14:15.434502994Z" level=info msg="Pulled image: gcr.io/google-samples/hello-app@sha256:9478d168fb78b0764dd7b3c147864c4da650ee456a1f21fc4d3fe2fbb20fe1fb" id=528b99e4-c5af-465f-92e5-c8529f5b8f9e name=/runtime.v1.ImageService/PullImage
	Sep 11 11:14:15 addons-387581 crio[945]: time="2023-09-11 11:14:15.435361760Z" level=info msg="Checking image status: gcr.io/google-samples/hello-app:1.0" id=8c79967a-b7db-466a-a183-142aca96d07d name=/runtime.v1.ImageService/ImageStatus
	Sep 11 11:14:15 addons-387581 crio[945]: time="2023-09-11 11:14:15.436417772Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a39a074194753e46f21cfbf0b4253444939f276ed100d23d5fc568ada19a9ebb,RepoTags:[gcr.io/google-samples/hello-app:1.0],RepoDigests:[gcr.io/google-samples/hello-app@sha256:9478d168fb78b0764dd7b3c147864c4da650ee456a1f21fc4d3fe2fbb20fe1fb],Size_:28999826,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=8c79967a-b7db-466a-a183-142aca96d07d name=/runtime.v1.ImageService/ImageStatus
	Sep 11 11:14:15 addons-387581 crio[945]: time="2023-09-11 11:14:15.437277831Z" level=info msg="Creating container: default/hello-world-app-5d77478584-q58bk/hello-world-app" id=19f815b8-505d-4efb-84d4-e0e3d82fad55 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 11 11:14:15 addons-387581 crio[945]: time="2023-09-11 11:14:15.437375527Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 11 11:14:15 addons-387581 crio[945]: time="2023-09-11 11:14:15.514205500Z" level=info msg="Created container a63c4e83173dc9caba9dbe364601be99ee036b409383cbcb3ac84e5921628f60: default/hello-world-app-5d77478584-q58bk/hello-world-app" id=19f815b8-505d-4efb-84d4-e0e3d82fad55 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 11 11:14:15 addons-387581 crio[945]: time="2023-09-11 11:14:15.514732911Z" level=info msg="Starting container: a63c4e83173dc9caba9dbe364601be99ee036b409383cbcb3ac84e5921628f60" id=1ece66c8-d5e9-4ac7-8267-a0b0f819b2e1 name=/runtime.v1.RuntimeService/StartContainer
	Sep 11 11:14:15 addons-387581 crio[945]: time="2023-09-11 11:14:15.524240915Z" level=info msg="Started container" PID=9543 containerID=a63c4e83173dc9caba9dbe364601be99ee036b409383cbcb3ac84e5921628f60 description=default/hello-world-app-5d77478584-q58bk/hello-world-app id=1ece66c8-d5e9-4ac7-8267-a0b0f819b2e1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=1b4744e3100aceabe695a34ea0462e60e9796f8227947f4f3f938970bad1c4f2
	Sep 11 11:14:16 addons-387581 crio[945]: time="2023-09-11 11:14:16.834828064Z" level=info msg="Stopping container: 67e965ee1735c1f4389963d586d94bb72642e7cf7cad2e63e7ef71ff11e32fba (timeout: 2s)" id=0a1f3144-eb7f-4d3e-af04-a0be66123536 name=/runtime.v1.RuntimeService/StopContainer
	Sep 11 11:14:18 addons-387581 crio[945]: time="2023-09-11 11:14:18.843716473Z" level=warning msg="Stopping container 67e965ee1735c1f4389963d586d94bb72642e7cf7cad2e63e7ef71ff11e32fba with stop signal timed out: timeout reached after 2 seconds waiting for container process to exit" id=0a1f3144-eb7f-4d3e-af04-a0be66123536 name=/runtime.v1.RuntimeService/StopContainer
	Sep 11 11:14:18 addons-387581 conmon[5475]: conmon 67e965ee1735c1f43899 <ninfo>: container 5487 exited with status 137
	Sep 11 11:14:18 addons-387581 crio[945]: time="2023-09-11 11:14:18.991927783Z" level=info msg="Stopped container 67e965ee1735c1f4389963d586d94bb72642e7cf7cad2e63e7ef71ff11e32fba: ingress-nginx/ingress-nginx-controller-798b8b85d7-zkcwf/controller" id=0a1f3144-eb7f-4d3e-af04-a0be66123536 name=/runtime.v1.RuntimeService/StopContainer
	Sep 11 11:14:18 addons-387581 crio[945]: time="2023-09-11 11:14:18.992444841Z" level=info msg="Stopping pod sandbox: 077dea9893ffb93a5b865c121f1ca47a58703a9fb4ceb8540b77027203f46f14" id=abb92cb3-b112-4c1b-8124-6354088416f8 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 11 11:14:18 addons-387581 crio[945]: time="2023-09-11 11:14:18.995434996Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HOSTPORTS - [0:0]\n:KUBE-HP-RDMFCEFRYFMXZILE - [0:0]\n:KUBE-HP-3N2PZQGKC5MZYJWW - [0:0]\n-X KUBE-HP-3N2PZQGKC5MZYJWW\n-X KUBE-HP-RDMFCEFRYFMXZILE\nCOMMIT\n"
	Sep 11 11:14:18 addons-387581 crio[945]: time="2023-09-11 11:14:18.996894509Z" level=info msg="Closing host port tcp:80"
	Sep 11 11:14:18 addons-387581 crio[945]: time="2023-09-11 11:14:18.996939922Z" level=info msg="Closing host port tcp:443"
	Sep 11 11:14:18 addons-387581 crio[945]: time="2023-09-11 11:14:18.998304480Z" level=info msg="Host port tcp:80 does not have an open socket"
	Sep 11 11:14:18 addons-387581 crio[945]: time="2023-09-11 11:14:18.998327419Z" level=info msg="Host port tcp:443 does not have an open socket"
	Sep 11 11:14:18 addons-387581 crio[945]: time="2023-09-11 11:14:18.998459234Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-798b8b85d7-zkcwf Namespace:ingress-nginx ID:077dea9893ffb93a5b865c121f1ca47a58703a9fb4ceb8540b77027203f46f14 UID:ea4de24a-6609-4e9e-8ca1-9218ec369c84 NetNS:/var/run/netns/2bce4584-27f6-4051-b2e2-b4b415f7170b Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 11 11:14:18 addons-387581 crio[945]: time="2023-09-11 11:14:18.998574490Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-798b8b85d7-zkcwf from CNI network \"kindnet\" (type=ptp)"
	Sep 11 11:14:19 addons-387581 crio[945]: time="2023-09-11 11:14:19.035690072Z" level=info msg="Stopped pod sandbox: 077dea9893ffb93a5b865c121f1ca47a58703a9fb4ceb8540b77027203f46f14" id=abb92cb3-b112-4c1b-8124-6354088416f8 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 11 11:14:19 addons-387581 crio[945]: time="2023-09-11 11:14:19.275887088Z" level=info msg="Removing container: 67e965ee1735c1f4389963d586d94bb72642e7cf7cad2e63e7ef71ff11e32fba" id=0b3ab4c3-0526-4c4f-b895-4316680b3667 name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 11 11:14:19 addons-387581 crio[945]: time="2023-09-11 11:14:19.291145618Z" level=info msg="Removed container 67e965ee1735c1f4389963d586d94bb72642e7cf7cad2e63e7ef71ff11e32fba: ingress-nginx/ingress-nginx-controller-798b8b85d7-zkcwf/controller" id=0b3ab4c3-0526-4c4f-b895-4316680b3667 name=/runtime.v1.RuntimeService/RemoveContainer
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	a63c4e83173dc       gcr.io/google-samples/hello-app@sha256:9478d168fb78b0764dd7b3c147864c4da650ee456a1f21fc4d3fe2fbb20fe1fb                      8 seconds ago       Running             hello-world-app           0                   1b4744e3100ac       hello-world-app-5d77478584-q58bk
	3217a2c133831       docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70                              2 minutes ago       Running             nginx                     0                   634ebf67313a4       nginx
	8b3fe90139862       ghcr.io/headlamp-k8s/headlamp@sha256:1909603c0614e14bda48b6b59d8166e796b652ca2ac196db5940063edbc21552                        2 minutes ago       Running             headlamp                  0                   6acb445412e16       headlamp-699c48fb74-2qvrm
	da335c5753cff       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06                 2 minutes ago       Running             gcp-auth                  0                   c9af59986aa8c       gcp-auth-d4c87556c-xnz66
	b8b2bed700ffd       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d   3 minutes ago       Exited              patch                     0                   d8bb4c1a396d1       ingress-nginx-admission-patch-bndjd
	41060a8576402       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d   3 minutes ago       Exited              create                    0                   227289c38804e       ingress-nginx-admission-create-95mmv
	67f6b1b1d7ba4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             3 minutes ago       Running             storage-provisioner       0                   f586e9a7d5ee6       storage-provisioner
	78eb0953a4520       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                                             3 minutes ago       Running             coredns                   0                   754dd3cf38b89       coredns-5dd5756b68-5fcnk
	82a0f28313e9d       6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5                                                             4 minutes ago       Running             kube-proxy                0                   29ae8feb68907       kube-proxy-bkffp
	f0c641e123414       b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da                                                             4 minutes ago       Running             kindnet-cni               0                   a10dad126e518       kindnet-kpzws
	7538f46c798f7       821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac                                                             4 minutes ago       Running             kube-controller-manager   0                   7006cb25a4609       kube-controller-manager-addons-387581
	df1ad99db69c9       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                                             4 minutes ago       Running             etcd                      0                   30e86ccc2ea3b       etcd-addons-387581
	6f40798a33834       5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77                                                             4 minutes ago       Running             kube-apiserver            0                   4c2506a258cc1       kube-apiserver-addons-387581
	f835e904b5e34       b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a                                                             4 minutes ago       Running             kube-scheduler            0                   d9a86a81e04f3       kube-scheduler-addons-387581
	
	* 
	* ==> coredns [78eb0953a4520a3a405f3d0d080d0668b5f9be14bb10785331bd16b4965ee7e7] <==
	* [INFO] 10.244.0.15:42645 - 39549 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000113207s
	[INFO] 10.244.0.15:53329 - 17218 "A IN registry.kube-system.svc.cluster.local.us-west1-a.c.k8s-minikube.internal. udp 91 false 512" NXDOMAIN qr,rd,ra 91 0.006987845s
	[INFO] 10.244.0.15:53329 - 33095 "AAAA IN registry.kube-system.svc.cluster.local.us-west1-a.c.k8s-minikube.internal. udp 91 false 512" NXDOMAIN qr,rd,ra 91 0.008812039s
	[INFO] 10.244.0.15:37512 - 40364 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.005360508s
	[INFO] 10.244.0.15:37512 - 24979 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.005612598s
	[INFO] 10.244.0.15:40460 - 7409 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.004731575s
	[INFO] 10.244.0.15:40460 - 57853 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.007199421s
	[INFO] 10.244.0.15:42100 - 41990 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000077482s
	[INFO] 10.244.0.15:42100 - 21253 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000126063s
	[INFO] 10.244.0.18:44663 - 52499 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000188864s
	[INFO] 10.244.0.18:52927 - 7397 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000263959s
	[INFO] 10.244.0.18:58613 - 41406 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000122151s
	[INFO] 10.244.0.18:45716 - 41212 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000137174s
	[INFO] 10.244.0.18:52348 - 38944 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000187516s
	[INFO] 10.244.0.18:50908 - 8663 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000235929s
	[INFO] 10.244.0.18:55675 - 12805 "A IN storage.googleapis.com.us-west1-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 75 0.008629727s
	[INFO] 10.244.0.18:52862 - 40240 "AAAA IN storage.googleapis.com.us-west1-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 75 0.009224988s
	[INFO] 10.244.0.18:47178 - 11447 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.008042364s
	[INFO] 10.244.0.18:39574 - 55461 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.008998672s
	[INFO] 10.244.0.18:45967 - 39259 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.006889251s
	[INFO] 10.244.0.18:44667 - 10824 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.007591521s
	[INFO] 10.244.0.18:46830 - 2406 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000712932s
	[INFO] 10.244.0.18:33722 - 32126 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 420 0.000739635s
	[INFO] 10.244.0.20:51172 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000092347s
	[INFO] 10.244.0.20:59497 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00004935s
	
	* 
	* ==> describe nodes <==
	* Name:               addons-387581
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-387581
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=58460de6978298fe1c37b30354468f3a287d03e9
	                    minikube.k8s.io/name=addons-387581
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_11T11_10_04_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-387581
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 11 Sep 2023 11:10:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-387581
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 11 Sep 2023 11:14:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 11 Sep 2023 11:12:36 +0000   Mon, 11 Sep 2023 11:09:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 11 Sep 2023 11:12:36 +0000   Mon, 11 Sep 2023 11:09:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 11 Sep 2023 11:12:36 +0000   Mon, 11 Sep 2023 11:09:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 11 Sep 2023 11:12:36 +0000   Mon, 11 Sep 2023 11:10:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-387581
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859436Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859436Ki
	  pods:               110
	System Info:
	  Machine ID:                 a844a8333716413bb0d0d1d9eedc9cf4
	  System UUID:                af4f5478-0ded-4c13-b8a3-699cc11faebe
	  Boot ID:                    0e6f3313-afe9-4b8d-8d49-46470123e935
	  Kernel Version:             5.15.0-1041-gcp
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.1
	  Kube-Proxy Version:         v1.28.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5d77478584-q58bk         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m34s
	  gcp-auth                    gcp-auth-d4c87556c-xnz66                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m59s
	  headlamp                    headlamp-699c48fb74-2qvrm                0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m51s
	  kube-system                 coredns-5dd5756b68-5fcnk                 100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     4m8s
	  kube-system                 etcd-addons-387581                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m20s
	  kube-system                 kindnet-kpzws                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m8s
	  kube-system                 kube-apiserver-addons-387581             250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m20s
	  kube-system                 kube-controller-manager-addons-387581    200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m20s
	  kube-system                 kube-proxy-bkffp                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m8s
	  kube-system                 kube-scheduler-addons-387581             100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m20s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m3s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m3s                   kube-proxy       
	  Normal  Starting                 4m26s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m26s (x8 over 4m26s)  kubelet          Node addons-387581 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m26s (x8 over 4m26s)  kubelet          Node addons-387581 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m26s (x8 over 4m26s)  kubelet          Node addons-387581 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m21s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m20s                  kubelet          Node addons-387581 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m20s                  kubelet          Node addons-387581 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m20s                  kubelet          Node addons-387581 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m8s                   node-controller  Node addons-387581 event: Registered Node addons-387581 in Controller
	  Normal  NodeReady                3m34s                  kubelet          Node addons-387581 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.000005] ll header: 00000000: ff ff ff ff ff ff 3a 66 7c 01 e7 48 08 06
	[  +6.553038] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff f6 6a 2b fe 65 5b 08 06
	[Sep11 10:57] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000011] ll header: 00000000: ff ff ff ff ff ff 1a 91 03 49 76 f1 08 06
	[  +1.001124] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000000] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 42 6c fb d7 fb 29 08 06
	[  +0.000000] ll header: 00000000: ff ff ff ff ff ff d2 f4 10 67 f3 6f 08 06
	[  +6.953074] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff de b7 dc 51 36 a8 08 06
	[Sep11 11:12] IPv4: martian source 10.244.0.17 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: ba be 8f 07 f5 cc ee 24 e2 30 b0 7d 08 00
	[  +1.003980] IPv4: martian source 10.244.0.17 from 127.0.0.1, on dev eth0
	[  +0.000019] ll header: 00000000: ba be 8f 07 f5 cc ee 24 e2 30 b0 7d 08 00
	[  +2.015818] IPv4: martian source 10.244.0.17 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: ba be 8f 07 f5 cc ee 24 e2 30 b0 7d 08 00
	[  +4.127665] IPv4: martian source 10.244.0.17 from 127.0.0.1, on dev eth0
	[  +0.000014] ll header: 00000000: ba be 8f 07 f5 cc ee 24 e2 30 b0 7d 08 00
	[  +8.191332] IPv4: martian source 10.244.0.17 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: ba be 8f 07 f5 cc ee 24 e2 30 b0 7d 08 00
	[ +16.126710] IPv4: martian source 10.244.0.17 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: ba be 8f 07 f5 cc ee 24 e2 30 b0 7d 08 00
	[Sep11 11:13] IPv4: martian source 10.244.0.17 from 127.0.0.1, on dev eth0
	[  +0.000034] ll header: 00000000: ba be 8f 07 f5 cc ee 24 e2 30 b0 7d 08 00
	
	* 
	* ==> etcd [df1ad99db69c97b73dbc068aab0032b95348b80f325c79533401c5547436ad99] <==
	* {"level":"info","ts":"2023-09-11T11:09:59.778673Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-11T11:09:59.779414Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-387581 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2023-09-11T11:09:59.77942Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-11T11:09:59.779441Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-11T11:09:59.779749Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-09-11T11:09:59.779807Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-09-11T11:09:59.779895Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-11T11:09:59.780021Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-11T11:09:59.780051Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-11T11:09:59.781544Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-09-11T11:09:59.782062Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2023-09-11T11:10:19.178416Z","caller":"traceutil/trace.go:171","msg":"trace[217028867] transaction","detail":"{read_only:false; response_revision:414; number_of_response:1; }","duration":"104.484493ms","start":"2023-09-11T11:10:19.073905Z","end":"2023-09-11T11:10:19.178389Z","steps":["trace[217028867] 'process raft request'  (duration: 103.787468ms)"],"step_count":1}
	{"level":"info","ts":"2023-09-11T11:10:19.178712Z","caller":"traceutil/trace.go:171","msg":"trace[1752552859] transaction","detail":"{read_only:false; response_revision:413; number_of_response:1; }","duration":"120.162883ms","start":"2023-09-11T11:10:19.058536Z","end":"2023-09-11T11:10:19.178698Z","steps":["trace[1752552859] 'process raft request'  (duration: 101.596064ms)"],"step_count":1}
	{"level":"info","ts":"2023-09-11T11:10:19.672079Z","caller":"traceutil/trace.go:171","msg":"trace[332667741] transaction","detail":"{read_only:false; response_revision:419; number_of_response:1; }","duration":"113.589512ms","start":"2023-09-11T11:10:19.558469Z","end":"2023-09-11T11:10:19.672059Z","steps":["trace[332667741] 'process raft request'  (duration: 15.533621ms)","trace[332667741] 'compare'  (duration: 97.705462ms)"],"step_count":2}
	{"level":"warn","ts":"2023-09-11T11:10:20.078254Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"113.781637ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/default/cloud-spanner-emulator\" ","response":"range_response_count:1 size:3143"}
	{"level":"info","ts":"2023-09-11T11:10:20.078319Z","caller":"traceutil/trace.go:171","msg":"trace[1984140335] range","detail":"{range_begin:/registry/deployments/default/cloud-spanner-emulator; range_end:; response_count:1; response_revision:433; }","duration":"113.859847ms","start":"2023-09-11T11:10:19.964445Z","end":"2023-09-11T11:10:20.078305Z","steps":["trace[1984140335] 'agreement among raft nodes before linearized reading'  (duration: 113.746629ms)"],"step_count":1}
	{"level":"info","ts":"2023-09-11T11:10:20.27656Z","caller":"traceutil/trace.go:171","msg":"trace[418273842] linearizableReadLoop","detail":"{readStateIndex:450; appliedIndex:448; }","duration":"112.801157ms","start":"2023-09-11T11:10:20.163741Z","end":"2023-09-11T11:10:20.276543Z","steps":["trace[418273842] 'read index received'  (duration: 7.165288ms)","trace[418273842] 'applied index is now lower than readState.Index'  (duration: 105.63493ms)"],"step_count":2}
	{"level":"info","ts":"2023-09-11T11:10:20.276827Z","caller":"traceutil/trace.go:171","msg":"trace[2055810612] transaction","detail":"{read_only:false; response_revision:435; number_of_response:1; }","duration":"197.268641ms","start":"2023-09-11T11:10:20.079546Z","end":"2023-09-11T11:10:20.276814Z","steps":["trace[2055810612] 'process raft request'  (duration: 191.283345ms)"],"step_count":1}
	{"level":"info","ts":"2023-09-11T11:10:20.276975Z","caller":"traceutil/trace.go:171","msg":"trace[1605066598] transaction","detail":"{read_only:false; response_revision:436; number_of_response:1; }","duration":"113.571373ms","start":"2023-09-11T11:10:20.163395Z","end":"2023-09-11T11:10:20.276966Z","steps":["trace[1605066598] 'process raft request'  (duration: 113.023937ms)"],"step_count":1}
	{"level":"info","ts":"2023-09-11T11:10:20.277201Z","caller":"traceutil/trace.go:171","msg":"trace[2069908644] transaction","detail":"{read_only:false; response_revision:437; number_of_response:1; }","duration":"107.727407ms","start":"2023-09-11T11:10:20.169453Z","end":"2023-09-11T11:10:20.277181Z","steps":["trace[2069908644] 'process raft request'  (duration: 107.012222ms)"],"step_count":1}
	{"level":"info","ts":"2023-09-11T11:10:20.277336Z","caller":"traceutil/trace.go:171","msg":"trace[581647542] transaction","detail":"{read_only:false; response_revision:438; number_of_response:1; }","duration":"107.79285ms","start":"2023-09-11T11:10:20.169535Z","end":"2023-09-11T11:10:20.277328Z","steps":["trace[581647542] 'process raft request'  (duration: 106.971176ms)"],"step_count":1}
	{"level":"warn","ts":"2023-09-11T11:10:20.277496Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"113.757184ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2023-09-11T11:10:20.277524Z","caller":"traceutil/trace.go:171","msg":"trace[896601441] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:438; }","duration":"113.799794ms","start":"2023-09-11T11:10:20.163716Z","end":"2023-09-11T11:10:20.277516Z","steps":["trace[896601441] 'agreement among raft nodes before linearized reading'  (duration: 113.722752ms)"],"step_count":1}
	{"level":"info","ts":"2023-09-11T11:10:20.46257Z","caller":"traceutil/trace.go:171","msg":"trace[881894238] transaction","detail":"{read_only:false; response_revision:444; number_of_response:1; }","duration":"100.475021ms","start":"2023-09-11T11:10:20.362076Z","end":"2023-09-11T11:10:20.462551Z","steps":["trace[881894238] 'process raft request'  (duration: 100.442867ms)"],"step_count":1}
	{"level":"info","ts":"2023-09-11T11:10:20.462942Z","caller":"traceutil/trace.go:171","msg":"trace[956190446] transaction","detail":"{read_only:false; response_revision:439; number_of_response:1; }","duration":"101.420925ms","start":"2023-09-11T11:10:20.361507Z","end":"2023-09-11T11:10:20.462928Z","steps":["trace[956190446] 'process raft request'  (duration: 14.818406ms)","trace[956190446] 'compare'  (duration: 85.913533ms)"],"step_count":2}
	
	* 
	* ==> gcp-auth [da335c5753cffc605111868edc21867e4ae730551f6bc03fb9d202a8e9949dfb] <==
	* 2023/09/11 11:11:27 GCP Auth Webhook started!
	2023/09/11 11:11:33 Ready to marshal response ...
	2023/09/11 11:11:33 Ready to write response ...
	2023/09/11 11:11:33 Ready to marshal response ...
	2023/09/11 11:11:33 Ready to write response ...
	2023/09/11 11:11:33 Ready to marshal response ...
	2023/09/11 11:11:33 Ready to write response ...
	2023/09/11 11:11:43 Ready to marshal response ...
	2023/09/11 11:11:43 Ready to write response ...
	2023/09/11 11:11:43 Ready to marshal response ...
	2023/09/11 11:11:43 Ready to write response ...
	2023/09/11 11:11:50 Ready to marshal response ...
	2023/09/11 11:11:50 Ready to write response ...
	2023/09/11 11:12:05 Ready to marshal response ...
	2023/09/11 11:12:05 Ready to write response ...
	2023/09/11 11:12:40 Ready to marshal response ...
	2023/09/11 11:12:40 Ready to write response ...
	2023/09/11 11:14:13 Ready to marshal response ...
	2023/09/11 11:14:13 Ready to write response ...
	
	* 
	* ==> kernel <==
	*  11:14:24 up 56 min,  0 users,  load average: 0.36, 1.66, 1.86
	Linux addons-387581 5.15.0-1041-gcp #49~20.04.1-Ubuntu SMP Tue Aug 29 06:49:34 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [f0c641e123414afde4a60510f7306b0427cc26917d61bdeaef743f7f105daf72] <==
	* I0911 11:12:20.321466       1 main.go:227] handling current node
	I0911 11:12:30.333256       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0911 11:12:30.333279       1 main.go:227] handling current node
	I0911 11:12:40.345622       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0911 11:12:40.345645       1 main.go:227] handling current node
	I0911 11:12:50.363609       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0911 11:12:50.363642       1 main.go:227] handling current node
	I0911 11:13:00.373236       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0911 11:13:00.373260       1 main.go:227] handling current node
	I0911 11:13:10.377624       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0911 11:13:10.377646       1 main.go:227] handling current node
	I0911 11:13:20.391111       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0911 11:13:20.391134       1 main.go:227] handling current node
	I0911 11:13:30.403267       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0911 11:13:30.403290       1 main.go:227] handling current node
	I0911 11:13:40.414503       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0911 11:13:40.414525       1 main.go:227] handling current node
	I0911 11:13:50.427832       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0911 11:13:50.428083       1 main.go:227] handling current node
	I0911 11:14:00.431520       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0911 11:14:00.431543       1 main.go:227] handling current node
	I0911 11:14:10.444061       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0911 11:14:10.444085       1 main.go:227] handling current node
	I0911 11:14:20.448850       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0911 11:14:20.448872       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [6f40798a33834d3fdb4667b833965a102a977c70e71178d008cab6af7b2483cc] <==
	* I0911 11:12:56.393520       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0911 11:12:56.393740       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0911 11:12:56.394249       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0911 11:12:56.394350       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0911 11:12:56.463960       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0911 11:12:56.464147       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0911 11:12:56.471882       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0911 11:12:56.471951       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0911 11:12:56.481716       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0911 11:12:56.481776       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0911 11:12:56.485962       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0911 11:12:56.485997       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	E0911 11:12:56.570230       1 controller.go:159] removing "v1.snapshot.storage.k8s.io" from AggregationController failed with: resource not found
	E0911 11:12:56.570252       1 controller.go:159] removing "v1.snapshot.storage.k8s.io" from AggregationController failed with: resource not found
	E0911 11:12:56.572309       1 controller.go:159] removing "v1beta1.snapshot.storage.k8s.io" from AggregationController failed with: resource not found
	E0911 11:12:56.573267       1 controller.go:159] removing "v1beta1.snapshot.storage.k8s.io" from AggregationController failed with: resource not found
	W0911 11:12:57.394700       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0911 11:12:57.486877       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0911 11:12:57.567291       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	E0911 11:13:00.680939       1 handler_proxy.go:137] error resolving kube-system/metrics-server: service "metrics-server" not found
	W0911 11:13:00.680968       1 handler_proxy.go:93] no RequestInfo found in the context
	E0911 11:13:00.681003       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0911 11:13:00.681010       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0911 11:14:13.978309       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.104.192.177"}
	
	* 
	* ==> kube-controller-manager [7538f46c798f7ed16323fcae31ae8ba7be16226b556b401181be7a3426367d01] <==
	* E0911 11:13:28.896800       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0911 11:13:35.785673       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0911 11:13:35.785708       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0911 11:13:43.992009       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0911 11:13:43.992040       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0911 11:13:54.472167       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0911 11:13:54.472196       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0911 11:14:13.823805       1 event.go:307] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-5d77478584 to 1"
	I0911 11:14:13.834647       1 event.go:307] "Event occurred" object="default/hello-world-app-5d77478584" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-5d77478584-q58bk"
	I0911 11:14:13.841349       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="17.741491ms"
	I0911 11:14:13.845147       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="3.734759ms"
	I0911 11:14:13.845224       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="39.999µs"
	I0911 11:14:13.845258       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="20.095µs"
	I0911 11:14:13.852943       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="83.704µs"
	I0911 11:14:15.823459       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I0911 11:14:15.824925       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-798b8b85d7" duration="4.137µs"
	I0911 11:14:15.827671       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	I0911 11:14:16.281017       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="5.945733ms"
	I0911 11:14:16.281095       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="44.136µs"
	W0911 11:14:17.873135       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0911 11:14:17.873164       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0911 11:14:19.832912       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0911 11:14:19.832952       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0911 11:14:21.353917       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0911 11:14:21.353960       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	* 
	* ==> kube-proxy [82a0f28313e9d0bcb242af6b597bda6e8e6c3873dc0c4e13ac692b7cb1e34ad4] <==
	* I0911 11:10:19.376040       1 server_others.go:69] "Using iptables proxy"
	I0911 11:10:19.682315       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I0911 11:10:20.465838       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0911 11:10:20.658461       1 server_others.go:152] "Using iptables Proxier"
	I0911 11:10:20.658515       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0911 11:10:20.658536       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0911 11:10:20.658565       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0911 11:10:20.658817       1 server.go:846] "Version info" version="v1.28.1"
	I0911 11:10:20.662557       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0911 11:10:20.668606       1 config.go:188] "Starting service config controller"
	I0911 11:10:20.670020       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0911 11:10:20.669213       1 config.go:315] "Starting node config controller"
	I0911 11:10:20.670185       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0911 11:10:20.669310       1 config.go:97] "Starting endpoint slice config controller"
	I0911 11:10:20.670210       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0911 11:10:20.772588       1 shared_informer.go:318] Caches are synced for service config
	I0911 11:10:20.772624       1 shared_informer.go:318] Caches are synced for node config
	I0911 11:10:20.772832       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [f835e904b5e3444abf5b874fe43a2c0beda607455f6cd211c28ef964e3b54cab] <==
	* W0911 11:10:01.080103       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0911 11:10:01.080117       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0911 11:10:01.080151       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0911 11:10:01.080162       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0911 11:10:01.080194       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0911 11:10:01.080208       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0911 11:10:01.080242       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0911 11:10:01.080257       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0911 11:10:01.080303       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0911 11:10:01.080371       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0911 11:10:01.080440       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0911 11:10:01.080452       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0911 11:10:01.887183       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0911 11:10:01.887245       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0911 11:10:01.954561       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0911 11:10:01.954594       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0911 11:10:02.000439       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0911 11:10:02.000475       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0911 11:10:02.058857       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0911 11:10:02.058895       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0911 11:10:02.148640       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0911 11:10:02.148668       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0911 11:10:02.149785       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0911 11:10:02.149805       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0911 11:10:02.372296       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* Sep 11 11:14:13 addons-387581 kubelet[1555]: I0911 11:14:13.958435    1555 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24b2t\" (UniqueName: \"kubernetes.io/projected/57152ee9-c58a-4088-8d4e-44833a9745a5-kube-api-access-24b2t\") pod \"hello-world-app-5d77478584-q58bk\" (UID: \"57152ee9-c58a-4088-8d4e-44833a9745a5\") " pod="default/hello-world-app-5d77478584-q58bk"
	Sep 11 11:14:13 addons-387581 kubelet[1555]: I0911 11:14:13.958503    1555 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/57152ee9-c58a-4088-8d4e-44833a9745a5-gcp-creds\") pod \"hello-world-app-5d77478584-q58bk\" (UID: \"57152ee9-c58a-4088-8d4e-44833a9745a5\") " pod="default/hello-world-app-5d77478584-q58bk"
	Sep 11 11:14:14 addons-387581 kubelet[1555]: W0911 11:14:14.259074    1555 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/812f30ff51f05cc6e536238e3b6cc088c3aca9c3e85e941d8830b77fbd7b4b2c/crio-1b4744e3100aceabe695a34ea0462e60e9796f8227947f4f3f938970bad1c4f2 WatchSource:0}: Error finding container 1b4744e3100aceabe695a34ea0462e60e9796f8227947f4f3f938970bad1c4f2: Status 404 returned error can't find the container with id 1b4744e3100aceabe695a34ea0462e60e9796f8227947f4f3f938970bad1c4f2
	Sep 11 11:14:14 addons-387581 kubelet[1555]: I0911 11:14:14.964431    1555 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6xmvs\" (UniqueName: \"kubernetes.io/projected/a7772be0-47af-495f-a091-8eb993378efa-kube-api-access-6xmvs\") pod \"a7772be0-47af-495f-a091-8eb993378efa\" (UID: \"a7772be0-47af-495f-a091-8eb993378efa\") "
	Sep 11 11:14:14 addons-387581 kubelet[1555]: I0911 11:14:14.966279    1555 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7772be0-47af-495f-a091-8eb993378efa-kube-api-access-6xmvs" (OuterVolumeSpecName: "kube-api-access-6xmvs") pod "a7772be0-47af-495f-a091-8eb993378efa" (UID: "a7772be0-47af-495f-a091-8eb993378efa"). InnerVolumeSpecName "kube-api-access-6xmvs". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 11 11:14:15 addons-387581 kubelet[1555]: I0911 11:14:15.064638    1555 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-6xmvs\" (UniqueName: \"kubernetes.io/projected/a7772be0-47af-495f-a091-8eb993378efa-kube-api-access-6xmvs\") on node \"addons-387581\" DevicePath \"\""
	Sep 11 11:14:15 addons-387581 kubelet[1555]: I0911 11:14:15.263976    1555 scope.go:117] "RemoveContainer" containerID="e49721400dd8bf7d2db617976f2a6068e47125163383ad740977143e87cc3b76"
	Sep 11 11:14:15 addons-387581 kubelet[1555]: I0911 11:14:15.283081    1555 scope.go:117] "RemoveContainer" containerID="e49721400dd8bf7d2db617976f2a6068e47125163383ad740977143e87cc3b76"
	Sep 11 11:14:15 addons-387581 kubelet[1555]: E0911 11:14:15.283945    1555 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e49721400dd8bf7d2db617976f2a6068e47125163383ad740977143e87cc3b76\": container with ID starting with e49721400dd8bf7d2db617976f2a6068e47125163383ad740977143e87cc3b76 not found: ID does not exist" containerID="e49721400dd8bf7d2db617976f2a6068e47125163383ad740977143e87cc3b76"
	Sep 11 11:14:15 addons-387581 kubelet[1555]: I0911 11:14:15.283997    1555 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e49721400dd8bf7d2db617976f2a6068e47125163383ad740977143e87cc3b76"} err="failed to get container status \"e49721400dd8bf7d2db617976f2a6068e47125163383ad740977143e87cc3b76\": rpc error: code = NotFound desc = could not find container \"e49721400dd8bf7d2db617976f2a6068e47125163383ad740977143e87cc3b76\": container with ID starting with e49721400dd8bf7d2db617976f2a6068e47125163383ad740977143e87cc3b76 not found: ID does not exist"
	Sep 11 11:14:15 addons-387581 kubelet[1555]: I0911 11:14:15.994950    1555 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="3ba5ea36-7e4e-4e63-84e5-3bfd5bfd97f6" path="/var/lib/kubelet/pods/3ba5ea36-7e4e-4e63-84e5-3bfd5bfd97f6/volumes"
	Sep 11 11:14:15 addons-387581 kubelet[1555]: I0911 11:14:15.995319    1555 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="81596488-f39a-45ba-b81a-5e63e42d5962" path="/var/lib/kubelet/pods/81596488-f39a-45ba-b81a-5e63e42d5962/volumes"
	Sep 11 11:14:15 addons-387581 kubelet[1555]: I0911 11:14:15.995597    1555 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="a7772be0-47af-495f-a091-8eb993378efa" path="/var/lib/kubelet/pods/a7772be0-47af-495f-a091-8eb993378efa/volumes"
	Sep 11 11:14:16 addons-387581 kubelet[1555]: I0911 11:14:16.274984    1555 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/hello-world-app-5d77478584-q58bk" podStartSLOduration=2.103696944 podCreationTimestamp="2023-09-11 11:14:13 +0000 UTC" firstStartedPulling="2023-09-11 11:14:14.263517482 +0000 UTC m=+250.355142117" lastFinishedPulling="2023-09-11 11:14:15.434769445 +0000 UTC m=+251.526394078" observedRunningTime="2023-09-11 11:14:16.274814786 +0000 UTC m=+252.366439429" watchObservedRunningTime="2023-09-11 11:14:16.274948905 +0000 UTC m=+252.366573546"
	Sep 11 11:14:19 addons-387581 kubelet[1555]: I0911 11:14:19.192987    1555 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ea4de24a-6609-4e9e-8ca1-9218ec369c84-webhook-cert\") pod \"ea4de24a-6609-4e9e-8ca1-9218ec369c84\" (UID: \"ea4de24a-6609-4e9e-8ca1-9218ec369c84\") "
	Sep 11 11:14:19 addons-387581 kubelet[1555]: I0911 11:14:19.193075    1555 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fzw48\" (UniqueName: \"kubernetes.io/projected/ea4de24a-6609-4e9e-8ca1-9218ec369c84-kube-api-access-fzw48\") pod \"ea4de24a-6609-4e9e-8ca1-9218ec369c84\" (UID: \"ea4de24a-6609-4e9e-8ca1-9218ec369c84\") "
	Sep 11 11:14:19 addons-387581 kubelet[1555]: I0911 11:14:19.194893    1555 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea4de24a-6609-4e9e-8ca1-9218ec369c84-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "ea4de24a-6609-4e9e-8ca1-9218ec369c84" (UID: "ea4de24a-6609-4e9e-8ca1-9218ec369c84"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Sep 11 11:14:19 addons-387581 kubelet[1555]: I0911 11:14:19.195093    1555 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea4de24a-6609-4e9e-8ca1-9218ec369c84-kube-api-access-fzw48" (OuterVolumeSpecName: "kube-api-access-fzw48") pod "ea4de24a-6609-4e9e-8ca1-9218ec369c84" (UID: "ea4de24a-6609-4e9e-8ca1-9218ec369c84"). InnerVolumeSpecName "kube-api-access-fzw48". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 11 11:14:19 addons-387581 kubelet[1555]: I0911 11:14:19.274883    1555 scope.go:117] "RemoveContainer" containerID="67e965ee1735c1f4389963d586d94bb72642e7cf7cad2e63e7ef71ff11e32fba"
	Sep 11 11:14:19 addons-387581 kubelet[1555]: I0911 11:14:19.291390    1555 scope.go:117] "RemoveContainer" containerID="67e965ee1735c1f4389963d586d94bb72642e7cf7cad2e63e7ef71ff11e32fba"
	Sep 11 11:14:19 addons-387581 kubelet[1555]: E0911 11:14:19.291766    1555 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"67e965ee1735c1f4389963d586d94bb72642e7cf7cad2e63e7ef71ff11e32fba\": container with ID starting with 67e965ee1735c1f4389963d586d94bb72642e7cf7cad2e63e7ef71ff11e32fba not found: ID does not exist" containerID="67e965ee1735c1f4389963d586d94bb72642e7cf7cad2e63e7ef71ff11e32fba"
	Sep 11 11:14:19 addons-387581 kubelet[1555]: I0911 11:14:19.291812    1555 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67e965ee1735c1f4389963d586d94bb72642e7cf7cad2e63e7ef71ff11e32fba"} err="failed to get container status \"67e965ee1735c1f4389963d586d94bb72642e7cf7cad2e63e7ef71ff11e32fba\": rpc error: code = NotFound desc = could not find container \"67e965ee1735c1f4389963d586d94bb72642e7cf7cad2e63e7ef71ff11e32fba\": container with ID starting with 67e965ee1735c1f4389963d586d94bb72642e7cf7cad2e63e7ef71ff11e32fba not found: ID does not exist"
	Sep 11 11:14:19 addons-387581 kubelet[1555]: I0911 11:14:19.294013    1555 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-fzw48\" (UniqueName: \"kubernetes.io/projected/ea4de24a-6609-4e9e-8ca1-9218ec369c84-kube-api-access-fzw48\") on node \"addons-387581\" DevicePath \"\""
	Sep 11 11:14:19 addons-387581 kubelet[1555]: I0911 11:14:19.294035    1555 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ea4de24a-6609-4e9e-8ca1-9218ec369c84-webhook-cert\") on node \"addons-387581\" DevicePath \"\""
	Sep 11 11:14:19 addons-387581 kubelet[1555]: I0911 11:14:19.995046    1555 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="ea4de24a-6609-4e9e-8ca1-9218ec369c84" path="/var/lib/kubelet/pods/ea4de24a-6609-4e9e-8ca1-9218ec369c84/volumes"
	
	* 
	* ==> storage-provisioner [67f6b1b1d7ba4fcf3fd754a8e363a12e41d4cfbda0614aa89d993a5f933fc56f] <==
	* I0911 11:10:51.274370       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0911 11:10:51.282637       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0911 11:10:51.282682       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0911 11:10:51.288525       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0911 11:10:51.288578       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"bc8e4d25-8f08-45db-9e37-ae5c340dd608", APIVersion:"v1", ResourceVersion:"861", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-387581_44e5dc18-6e4d-416d-add6-e23b739caba2 became leader
	I0911 11:10:51.288663       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-387581_44e5dc18-6e4d-416d-add6-e23b739caba2!
	I0911 11:10:51.389414       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-387581_44e5dc18-6e4d-416d-add6-e23b739caba2!
	

                                                
                                                
-- /stdout --
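The scheduler "forbidden" errors in the dump above are the usual startup race: kube-scheduler's informers begin listing cluster resources before its RBAC bindings are being served, and the closing "Caches are synced" line shows they cleared on their own. The kubelet's ContainerStatus NotFound errors are a similarly benign race: the RemoveContainer retry fires after CRI-O has already deleted the container. A minimal manual check, assuming the addons-387581 cluster is still running:

    # Should print "yes" once the scheduler's RBAC has settled:
    kubectl --context addons-387581 auth can-i list csistoragecapacities.storage.k8s.io --as=system:kube-scheduler
    # Confirm the container the kubelet complained about really is gone:
    minikube -p addons-387581 ssh -- sudo crictl ps -a | grep e49721 || echo "already removed, matching the NotFound error"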
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-387581 -n addons-387581
helpers_test.go:261: (dbg) Run:  kubectl --context addons-387581 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (154.42s)
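curl's exit status 28 means "operation timed out": the SSH session ran, but nothing behind 127.0.0.1:80 answered within the test's window. A sketch of how one might re-probe by hand, assuming the profile is still up (the --max-time value is arbitrary):

    # Retry the exact request with verbose output and a short timeout:
    minikube -p addons-387581 ssh -- curl -v --max-time 10 -H "Host: nginx.example.com" http://127.0.0.1/
    # Check whether the controller pod is serving and what it logged:
    kubectl --context addons-387581 -n ingress-nginx get pods,svc -o wide
    kubectl --context addons-387581 -n ingress-nginx logs -l app.kubernetes.io/component=controller --tail=50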

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddons (181.17s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:183: (dbg) Run:  kubectl --context ingress-addon-legacy-452365 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:183: (dbg) Done: kubectl --context ingress-addon-legacy-452365 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (11.598519587s)
addons_test.go:208: (dbg) Run:  kubectl --context ingress-addon-legacy-452365 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:221: (dbg) Run:  kubectl --context ingress-addon-legacy-452365 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [9c5101c3-fc4a-4223-b181-3c2424611bac] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [9c5101c3-fc4a-4223-b181-3c2424611bac] Running
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 11.008586094s
addons_test.go:238: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-452365 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
E0911 11:21:32.891355  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/addons-387581/client.crt: no such file or directory
E0911 11:22:00.576166  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/addons-387581/client.crt: no such file or directory
addons_test.go:238: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ingress-addon-legacy-452365 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m11.19011833s)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 28

                                                
                                                
** /stderr **
addons_test.go:254: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:262: (dbg) Run:  kubectl --context ingress-addon-legacy-452365 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:267: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-452365 ip
addons_test.go:273: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:273: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.004456438s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
addons_test.go:275: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:279: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
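The timeout means nothing answered DNS on 192.168.49.2:53 at all, so the ingress-dns pod was likely not serving rather than returning a wrong record. A hedged diagnostic sketch (the app=minikube-ingress-dns label is an assumption about the addon's manifest, not taken from this log):

    # Is the addon pod present and Ready?
    kubectl --context ingress-addon-legacy-452365 -n kube-system get pods -l app=minikube-ingress-dns -o wide
    # Query once with an explicit short timeout instead of nslookup's defaults:
    dig +time=5 +tries=1 @192.168.49.2 hello-john.test
    # Is anything bound to UDP 53 inside the node?
    minikube -p ingress-addon-legacy-452365 ssh -- sudo ss -ulpn | grep ':53'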
addons_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-452365 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-452365 addons disable ingress-dns --alsologtostderr -v=1: (2.340262198s)
addons_test.go:287: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-452365 addons disable ingress --alsologtostderr -v=1
E0911 11:22:22.263591  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/functional-224127/client.crt: no such file or directory
E0911 11:22:22.268981  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/functional-224127/client.crt: no such file or directory
E0911 11:22:22.279304  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/functional-224127/client.crt: no such file or directory
E0911 11:22:22.299599  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/functional-224127/client.crt: no such file or directory
E0911 11:22:22.340403  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/functional-224127/client.crt: no such file or directory
E0911 11:22:22.420699  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/functional-224127/client.crt: no such file or directory
E0911 11:22:22.581053  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/functional-224127/client.crt: no such file or directory
E0911 11:22:22.901437  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/functional-224127/client.crt: no such file or directory
E0911 11:22:23.542415  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/functional-224127/client.crt: no such file or directory
E0911 11:22:24.822913  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/functional-224127/client.crt: no such file or directory
E0911 11:22:27.383946  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/functional-224127/client.crt: no such file or directory
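The cert_rotation errors above are noise from this failure's perspective: they reference client certificates of the addons-387581 and functional-224127 profiles, which earlier tests already deleted, while the harness's shared kubeconfig still carries their contexts. A cleanup sketch, assuming the stale context names match the profile names in the log:

    # List what the kubeconfig still references:
    kubectl config get-contexts
    # Drop the contexts left behind by deleted profiles:
    kubectl config delete-context functional-224127
    kubectl config delete-context addons-387581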
addons_test.go:287: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-452365 addons disable ingress --alsologtostderr -v=1: (7.394279931s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-452365
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-452365:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "c67a9f521032edb57ed44d753210d912028f588a92e9e54c19f31e144832953c",
	        "Created": "2023-09-11T11:18:24.343252949Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 182717,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-09-11T11:18:24.638247375Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a5b1b95d50f24b5df6a9115c9ada0cb74f27ed4b03c4761eb60ee23f0bdd5210",
	        "ResolvConfPath": "/var/lib/docker/containers/c67a9f521032edb57ed44d753210d912028f588a92e9e54c19f31e144832953c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c67a9f521032edb57ed44d753210d912028f588a92e9e54c19f31e144832953c/hostname",
	        "HostsPath": "/var/lib/docker/containers/c67a9f521032edb57ed44d753210d912028f588a92e9e54c19f31e144832953c/hosts",
	        "LogPath": "/var/lib/docker/containers/c67a9f521032edb57ed44d753210d912028f588a92e9e54c19f31e144832953c/c67a9f521032edb57ed44d753210d912028f588a92e9e54c19f31e144832953c-json.log",
	        "Name": "/ingress-addon-legacy-452365",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "ingress-addon-legacy-452365:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ingress-addon-legacy-452365",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/a2eefce5820d04c2dfa1b30c25ae211d26663e6ec77ca96fb74cd6431f7547a2-init/diff:/var/lib/docker/overlay2/5fefd4c14d5bc4d7d67c2f6371e7160909b1f4d0d9a655e2a127286f8f0bbb5c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a2eefce5820d04c2dfa1b30c25ae211d26663e6ec77ca96fb74cd6431f7547a2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a2eefce5820d04c2dfa1b30c25ae211d26663e6ec77ca96fb74cd6431f7547a2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a2eefce5820d04c2dfa1b30c25ae211d26663e6ec77ca96fb74cd6431f7547a2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-452365",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-452365/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-452365",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-452365",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-452365",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c78e477891d7dd44aa13eecd89de7cf1c251b88ec571c6162c4a3ce07ff731c6",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32907"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32906"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32903"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32905"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32904"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/c78e477891d7",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-452365": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "c67a9f521032",
	                        "ingress-addon-legacy-452365"
	                    ],
	                    "NetworkID": "4120083f153027cf712dae88902c6fa64261aabd56af8c47d7a9a30c4a80e54f",
	                    "EndpointID": "cd05bcb0a7f22b5e6497e7d8d8e417ba452788953888c0d65fc3fa64d37d461c",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
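Note that HostConfig.PortBindings in the dump requests HostPort "" (let Docker choose), so the assigned ports appear only under NetworkSettings.Ports. A one-line sketch for reading a mapping back out, using the SSH port as the example:

    # Prints 32907 for the dump above:
    docker inspect -f '{{ (index (index .NetworkSettings.Ports "22/tcp") 0).HostPort }}' ingress-addon-legacy-452365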
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ingress-addon-legacy-452365 -n ingress-addon-legacy-452365
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-452365 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-452365 logs -n 25: (1.079153787s)
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |----------------|-----------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                 Args                                  |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|----------------|-----------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| update-context | functional-224127                                                     | functional-224127           | jenkins | v1.31.2 | 11 Sep 23 11:17 UTC | 11 Sep 23 11:17 UTC |
	|                | update-context                                                        |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                |                             |         |         |                     |                     |
	| service        | functional-224127 service                                             | functional-224127           | jenkins | v1.31.2 | 11 Sep 23 11:17 UTC | 11 Sep 23 11:18 UTC |
	|                | hello-node --url                                                      |                             |         |         |                     |                     |
	| ssh            | functional-224127 ssh findmnt                                         | functional-224127           | jenkins | v1.31.2 | 11 Sep 23 11:17 UTC |                     |
	|                | -T /mount1                                                            |                             |         |         |                     |                     |
	| mount          | -p functional-224127                                                  | functional-224127           | jenkins | v1.31.2 | 11 Sep 23 11:17 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup999393801/001:/mount1 |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                |                             |         |         |                     |                     |
	| mount          | -p functional-224127                                                  | functional-224127           | jenkins | v1.31.2 | 11 Sep 23 11:17 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup999393801/001:/mount2 |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                |                             |         |         |                     |                     |
	| mount          | -p functional-224127                                                  | functional-224127           | jenkins | v1.31.2 | 11 Sep 23 11:17 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup999393801/001:/mount3 |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                |                             |         |         |                     |                     |
	| image          | functional-224127                                                     | functional-224127           | jenkins | v1.31.2 | 11 Sep 23 11:17 UTC | 11 Sep 23 11:17 UTC |
	|                | image ls --format short                                               |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                     |                             |         |         |                     |                     |
	| image          | functional-224127                                                     | functional-224127           | jenkins | v1.31.2 | 11 Sep 23 11:17 UTC | 11 Sep 23 11:18 UTC |
	|                | image ls --format yaml                                                |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                     |                             |         |         |                     |                     |
	| ssh            | functional-224127 ssh pgrep                                           | functional-224127           | jenkins | v1.31.2 | 11 Sep 23 11:18 UTC |                     |
	|                | buildkitd                                                             |                             |         |         |                     |                     |
	| image          | functional-224127                                                     | functional-224127           | jenkins | v1.31.2 | 11 Sep 23 11:18 UTC | 11 Sep 23 11:18 UTC |
	|                | image ls --format json                                                |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                     |                             |         |         |                     |                     |
	| image          | functional-224127                                                     | functional-224127           | jenkins | v1.31.2 | 11 Sep 23 11:18 UTC | 11 Sep 23 11:18 UTC |
	|                | image ls --format table                                               |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                     |                             |         |         |                     |                     |
	| image          | functional-224127 image build -t                                      | functional-224127           | jenkins | v1.31.2 | 11 Sep 23 11:18 UTC | 11 Sep 23 11:18 UTC |
	|                | localhost/my-image:functional-224127                                  |                             |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                      |                             |         |         |                     |                     |
	| ssh            | functional-224127 ssh findmnt                                         | functional-224127           | jenkins | v1.31.2 | 11 Sep 23 11:18 UTC | 11 Sep 23 11:18 UTC |
	|                | -T /mount1                                                            |                             |         |         |                     |                     |
	| ssh            | functional-224127 ssh findmnt                                         | functional-224127           | jenkins | v1.31.2 | 11 Sep 23 11:18 UTC | 11 Sep 23 11:18 UTC |
	|                | -T /mount2                                                            |                             |         |         |                     |                     |
	| ssh            | functional-224127 ssh findmnt                                         | functional-224127           | jenkins | v1.31.2 | 11 Sep 23 11:18 UTC | 11 Sep 23 11:18 UTC |
	|                | -T /mount3                                                            |                             |         |         |                     |                     |
	| mount          | -p functional-224127                                                  | functional-224127           | jenkins | v1.31.2 | 11 Sep 23 11:18 UTC |                     |
	|                | --kill=true                                                           |                             |         |         |                     |                     |
	| image          | functional-224127 image ls                                            | functional-224127           | jenkins | v1.31.2 | 11 Sep 23 11:18 UTC | 11 Sep 23 11:18 UTC |
	| delete         | -p functional-224127                                                  | functional-224127           | jenkins | v1.31.2 | 11 Sep 23 11:18 UTC | 11 Sep 23 11:18 UTC |
	| start          | -p ingress-addon-legacy-452365                                        | ingress-addon-legacy-452365 | jenkins | v1.31.2 | 11 Sep 23 11:18 UTC | 11 Sep 23 11:19 UTC |
	|                | --kubernetes-version=v1.18.20                                         |                             |         |         |                     |                     |
	|                | --memory=4096 --wait=true                                             |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                     |                             |         |         |                     |                     |
	|                | -v=5 --driver=docker                                                  |                             |         |         |                     |                     |
	|                | --container-runtime=crio                                              |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-452365                                           | ingress-addon-legacy-452365 | jenkins | v1.31.2 | 11 Sep 23 11:19 UTC | 11 Sep 23 11:19 UTC |
	|                | addons enable ingress                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                                |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-452365                                           | ingress-addon-legacy-452365 | jenkins | v1.31.2 | 11 Sep 23 11:19 UTC | 11 Sep 23 11:19 UTC |
	|                | addons enable ingress-dns                                             |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                                |                             |         |         |                     |                     |
	| ssh            | ingress-addon-legacy-452365                                           | ingress-addon-legacy-452365 | jenkins | v1.31.2 | 11 Sep 23 11:19 UTC |                     |
	|                | ssh curl -s http://127.0.0.1/                                         |                             |         |         |                     |                     |
	|                | -H 'Host: nginx.example.com'                                          |                             |         |         |                     |                     |
	| ip             | ingress-addon-legacy-452365 ip                                        | ingress-addon-legacy-452365 | jenkins | v1.31.2 | 11 Sep 23 11:22 UTC | 11 Sep 23 11:22 UTC |
	| addons         | ingress-addon-legacy-452365                                           | ingress-addon-legacy-452365 | jenkins | v1.31.2 | 11 Sep 23 11:22 UTC | 11 Sep 23 11:22 UTC |
	|                | addons disable ingress-dns                                            |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-452365                                           | ingress-addon-legacy-452365 | jenkins | v1.31.2 | 11 Sep 23 11:22 UTC | 11 Sep 23 11:22 UTC |
	|                | addons disable ingress                                                |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                |                             |         |         |                     |                     |
	|----------------|-----------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/11 11:18:12
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.20.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0911 11:18:12.485132  182098 out.go:296] Setting OutFile to fd 1 ...
	I0911 11:18:12.485251  182098 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 11:18:12.485263  182098 out.go:309] Setting ErrFile to fd 2...
	I0911 11:18:12.485267  182098 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 11:18:12.485452  182098 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17223-136166/.minikube/bin
	I0911 11:18:12.486027  182098 out.go:303] Setting JSON to false
	I0911 11:18:12.487005  182098 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":3641,"bootTime":1694427452,"procs":304,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0911 11:18:12.487064  182098 start.go:138] virtualization: kvm guest
	I0911 11:18:12.489238  182098 out.go:177] * [ingress-addon-legacy-452365] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0911 11:18:12.491201  182098 out.go:177]   - MINIKUBE_LOCATION=17223
	I0911 11:18:12.491271  182098 notify.go:220] Checking for updates...
	I0911 11:18:12.492791  182098 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0911 11:18:12.494561  182098 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17223-136166/kubeconfig
	I0911 11:18:12.496120  182098 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17223-136166/.minikube
	I0911 11:18:12.497927  182098 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0911 11:18:12.499552  182098 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0911 11:18:12.501297  182098 driver.go:373] Setting default libvirt URI to qemu:///system
	I0911 11:18:12.525232  182098 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I0911 11:18:12.525325  182098 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0911 11:18:12.578174  182098 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:37 SystemTime:2023-09-11 11:18:12.569723545 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1041-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0911 11:18:12.578270  182098 docker.go:294] overlay module found
	I0911 11:18:12.580579  182098 out.go:177] * Using the docker driver based on user configuration
	I0911 11:18:12.582105  182098 start.go:298] selected driver: docker
	I0911 11:18:12.582120  182098 start.go:902] validating driver "docker" against <nil>
	I0911 11:18:12.582131  182098 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0911 11:18:12.582809  182098 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0911 11:18:12.642160  182098 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:37 SystemTime:2023-09-11 11:18:12.633389546 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1041-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0911 11:18:12.642375  182098 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0911 11:18:12.642617  182098 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0911 11:18:12.644681  182098 out.go:177] * Using Docker driver with root privileges
	I0911 11:18:12.646251  182098 cni.go:84] Creating CNI manager for ""
	I0911 11:18:12.646270  182098 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0911 11:18:12.646283  182098 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I0911 11:18:12.646297  182098 start_flags.go:321] config:
	{Name:ingress-addon-legacy-452365 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-452365 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 11:18:12.647947  182098 out.go:177] * Starting control plane node ingress-addon-legacy-452365 in cluster ingress-addon-legacy-452365
	I0911 11:18:12.649312  182098 cache.go:122] Beginning downloading kic base image for docker with crio
	I0911 11:18:12.650719  182098 out.go:177] * Pulling base image ...
	I0911 11:18:12.652171  182098 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0911 11:18:12.652196  182098 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b in local docker daemon
	I0911 11:18:12.667930  182098 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b in local docker daemon, skipping pull
	I0911 11:18:12.667956  182098 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b exists in daemon, skipping load
	I0911 11:18:12.676621  182098 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	I0911 11:18:12.676649  182098 cache.go:57] Caching tarball of preloaded images
	I0911 11:18:12.676794  182098 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0911 11:18:12.678932  182098 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0911 11:18:12.680453  182098 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I0911 11:18:12.708673  182098 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4?checksum=md5:0d02e096853189c5b37812b400898e14 -> /home/jenkins/minikube-integration/17223-136166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	I0911 11:18:16.116581  182098 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I0911 11:18:16.116679  182098 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17223-136166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I0911 11:18:17.128971  182098 cache.go:60] Finished verifying existence of preloaded tar for  v1.18.20 on crio
	I0911 11:18:17.129300  182098 profile.go:148] Saving config to /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/ingress-addon-legacy-452365/config.json ...
	I0911 11:18:17.129330  182098 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/ingress-addon-legacy-452365/config.json: {Name:mkbb404eeea37d1c7f3491765db3bb8f29dea65d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 11:18:17.129485  182098 cache.go:195] Successfully downloaded all kic artifacts
	I0911 11:18:17.129507  182098 start.go:365] acquiring machines lock for ingress-addon-legacy-452365: {Name:mk70b7ca72e4f77fa1ba6fe711463da98e6db08d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 11:18:17.129551  182098 start.go:369] acquired machines lock for "ingress-addon-legacy-452365" in 32.888µs
	I0911 11:18:17.129568  182098 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-452365 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-452365 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0911 11:18:17.129651  182098 start.go:125] createHost starting for "" (driver="docker")
	I0911 11:18:17.131980  182098 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0911 11:18:17.132234  182098 start.go:159] libmachine.API.Create for "ingress-addon-legacy-452365" (driver="docker")
	I0911 11:18:17.132259  182098 client.go:168] LocalClient.Create starting
	I0911 11:18:17.132366  182098 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17223-136166/.minikube/certs/ca.pem
	I0911 11:18:17.132402  182098 main.go:141] libmachine: Decoding PEM data...
	I0911 11:18:17.132418  182098 main.go:141] libmachine: Parsing certificate...
	I0911 11:18:17.132477  182098 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17223-136166/.minikube/certs/cert.pem
	I0911 11:18:17.132496  182098 main.go:141] libmachine: Decoding PEM data...
	I0911 11:18:17.132504  182098 main.go:141] libmachine: Parsing certificate...
	I0911 11:18:17.132815  182098 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-452365 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0911 11:18:17.148933  182098 cli_runner.go:211] docker network inspect ingress-addon-legacy-452365 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0911 11:18:17.149020  182098 network_create.go:281] running [docker network inspect ingress-addon-legacy-452365] to gather additional debugging logs...
	I0911 11:18:17.149045  182098 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-452365
	W0911 11:18:17.163583  182098 cli_runner.go:211] docker network inspect ingress-addon-legacy-452365 returned with exit code 1
	I0911 11:18:17.163616  182098 network_create.go:284] error running [docker network inspect ingress-addon-legacy-452365]: docker network inspect ingress-addon-legacy-452365: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ingress-addon-legacy-452365 not found
	I0911 11:18:17.163641  182098 network_create.go:286] output of [docker network inspect ingress-addon-legacy-452365]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ingress-addon-legacy-452365 not found
	
	** /stderr **
	I0911 11:18:17.163697  182098 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0911 11:18:17.179741  182098 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0015cebc0}
	I0911 11:18:17.179775  182098 network_create.go:123] attempt to create docker network ingress-addon-legacy-452365 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0911 11:18:17.179828  182098 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-452365 ingress-addon-legacy-452365
	I0911 11:18:17.231254  182098 network_create.go:107] docker network ingress-addon-legacy-452365 192.168.49.0/24 created
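Note the `-o --ip-masq -o --icc` flags in the create command above: each `-o` appears to pass a literal driver-option key, so the bridge driver receives options actually named `--ip-masq` and `--icc`. Reproducing the invocation outside minikube's cli_runner is straightforward; a sketch with the names and subnet copied from this run:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Flags mirror the "docker network create" line logged above.
    	cmd := exec.Command("docker", "network", "create",
    		"--driver=bridge",
    		"--subnet=192.168.49.0/24",
    		"--gateway=192.168.49.1",
    		"-o", "--ip-masq", "-o", "--icc", // literal driver-option keys
    		"-o", "com.docker.network.driver.mtu=1500",
    		"--label=created_by.minikube.sigs.k8s.io=true",
    		"--label=name.minikube.sigs.k8s.io=ingress-addon-legacy-452365",
    		"ingress-addon-legacy-452365")
    	out, err := cmd.CombinedOutput()
    	if err != nil {
    		fmt.Printf("network create failed: %v\n%s", err, out)
    		return
    	}
    	fmt.Printf("created network: %s", out)
    }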
	I0911 11:18:17.231283  182098 kic.go:117] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-452365" container
	I0911 11:18:17.231335  182098 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0911 11:18:17.246674  182098 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-452365 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-452365 --label created_by.minikube.sigs.k8s.io=true
	I0911 11:18:17.263484  182098 oci.go:103] Successfully created a docker volume ingress-addon-legacy-452365
	I0911 11:18:17.263565  182098 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-452365-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-452365 --entrypoint /usr/bin/test -v ingress-addon-legacy-452365:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b -d /var/lib
	I0911 11:18:18.990930  182098 cli_runner.go:217] Completed: docker run --rm --name ingress-addon-legacy-452365-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-452365 --entrypoint /usr/bin/test -v ingress-addon-legacy-452365:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b -d /var/lib: (1.727313608s)
	I0911 11:18:18.990958  182098 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-452365
	I0911 11:18:18.990975  182098 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0911 11:18:18.990996  182098 kic.go:190] Starting extracting preloaded images to volume ...
	I0911 11:18:18.991052  182098 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17223-136166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-452365:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b -I lz4 -xf /preloaded.tar -C /extractDir
	I0911 11:18:24.271365  182098 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17223-136166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-452365:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b -I lz4 -xf /preloaded.tar -C /extractDir: (5.280267274s)
	I0911 11:18:24.271397  182098 kic.go:199] duration metric: took 5.280397 seconds to extract preloaded images to volume
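The two `docker run --rm` commands above seed the preload: a throwaway container with /usr/bin/tar as its entrypoint mounts the tarball read-only next to the named volume and unpacks into it, so the node container later starts with /var already populated. A sketch of the same shape (paths from this run; the kicbase tag is shown without its @sha256 digest for brevity):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	tarball := "/home/jenkins/minikube-integration/17223-136166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4"
    	// Disposable container: tar as entrypoint, tarball mounted read-only,
    	// named volume mounted as the extraction target.
    	cmd := exec.Command("docker", "run", "--rm",
    		"--entrypoint", "/usr/bin/tar",
    		"-v", tarball+":/preloaded.tar:ro",
    		"-v", "ingress-addon-legacy-452365:/extractDir",
    		"gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174",
    		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
    	if out, err := cmd.CombinedOutput(); err != nil {
    		fmt.Printf("extract failed: %v\n%s", err, out)
    	}
    }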
	W0911 11:18:24.271526  182098 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0911 11:18:24.271604  182098 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0911 11:18:24.327323  182098 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-452365 --name ingress-addon-legacy-452365 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-452365 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-452365 --network ingress-addon-legacy-452365 --ip 192.168.49.2 --volume ingress-addon-legacy-452365:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b
	I0911 11:18:24.647178  182098 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-452365 --format={{.State.Running}}
	I0911 11:18:24.667418  182098 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-452365 --format={{.State.Status}}
	I0911 11:18:24.685161  182098 cli_runner.go:164] Run: docker exec ingress-addon-legacy-452365 stat /var/lib/dpkg/alternatives/iptables
	I0911 11:18:24.729262  182098 oci.go:144] the created container "ingress-addon-legacy-452365" has a running status.
	I0911 11:18:24.729299  182098 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/17223-136166/.minikube/machines/ingress-addon-legacy-452365/id_rsa...
	I0911 11:18:24.810914  182098 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-136166/.minikube/machines/ingress-addon-legacy-452365/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0911 11:18:24.810960  182098 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17223-136166/.minikube/machines/ingress-addon-legacy-452365/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0911 11:18:24.831236  182098 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-452365 --format={{.State.Status}}
	I0911 11:18:24.847786  182098 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0911 11:18:24.847811  182098 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-452365 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0911 11:18:24.907598  182098 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-452365 --format={{.State.Status}}
	I0911 11:18:24.924982  182098 machine.go:88] provisioning docker machine ...
	I0911 11:18:24.925020  182098 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-452365"
	I0911 11:18:24.925082  182098 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-452365
	I0911 11:18:24.945117  182098 main.go:141] libmachine: Using SSH client type: native
	I0911 11:18:24.945548  182098 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 127.0.0.1 32907 <nil> <nil>}
	I0911 11:18:24.945563  182098 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-452365 && echo "ingress-addon-legacy-452365" | sudo tee /etc/hostname
	I0911 11:18:24.946143  182098 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:34650->127.0.0.1:32907: read: connection reset by peer
	I0911 11:18:28.080137  182098 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-452365
	
	I0911 11:18:28.080225  182098 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-452365
	I0911 11:18:28.095653  182098 main.go:141] libmachine: Using SSH client type: native
	I0911 11:18:28.096107  182098 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 127.0.0.1 32907 <nil> <nil>}
	I0911 11:18:28.096129  182098 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-452365' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-452365/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-452365' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0911 11:18:28.222078  182098 main.go:141] libmachine: SSH cmd err, output: <nil>: 
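These "About to run SSH command" steps go through libmachine's native Go SSH client over the forwarded host port. A self-contained sketch of that path using golang.org/x/crypto/ssh, with the port and key path taken from this run (host-key verification is skipped here for brevity):

    package main

    import (
    	"fmt"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	keyPath := "/home/jenkins/minikube-integration/17223-136166/.minikube/machines/ingress-addon-legacy-452365/id_rsa"
    	key, err := os.ReadFile(keyPath)
    	if err != nil {
    		panic(err)
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		panic(err)
    	}
    	// Dial the host port Docker forwarded to the container's sshd (32907 above).
    	client, err := ssh.Dial("tcp", "127.0.0.1:32907", &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // skipped for brevity
    	})
    	if err != nil {
    		panic(err)
    	}
    	defer client.Close()
    	session, err := client.NewSession()
    	if err != nil {
    		panic(err)
    	}
    	defer session.Close()
    	out, err := session.CombinedOutput("hostname")
    	fmt.Printf("out=%s err=%v\n", out, err)
    }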
	I0911 11:18:28.222126  182098 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17223-136166/.minikube CaCertPath:/home/jenkins/minikube-integration/17223-136166/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17223-136166/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17223-136166/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17223-136166/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17223-136166/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17223-136166/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17223-136166/.minikube}
	I0911 11:18:28.222148  182098 ubuntu.go:177] setting up certificates
	I0911 11:18:28.222158  182098 provision.go:83] configureAuth start
	I0911 11:18:28.222217  182098 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-452365
	I0911 11:18:28.238198  182098 provision.go:138] copyHostCerts
	I0911 11:18:28.238250  182098 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-136166/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17223-136166/.minikube/ca.pem
	I0911 11:18:28.238279  182098 exec_runner.go:144] found /home/jenkins/minikube-integration/17223-136166/.minikube/ca.pem, removing ...
	I0911 11:18:28.238288  182098 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17223-136166/.minikube/ca.pem
	I0911 11:18:28.238353  182098 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17223-136166/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17223-136166/.minikube/ca.pem (1082 bytes)
	I0911 11:18:28.238425  182098 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-136166/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17223-136166/.minikube/cert.pem
	I0911 11:18:28.238443  182098 exec_runner.go:144] found /home/jenkins/minikube-integration/17223-136166/.minikube/cert.pem, removing ...
	I0911 11:18:28.238450  182098 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17223-136166/.minikube/cert.pem
	I0911 11:18:28.238472  182098 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17223-136166/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17223-136166/.minikube/cert.pem (1123 bytes)
	I0911 11:18:28.238515  182098 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-136166/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17223-136166/.minikube/key.pem
	I0911 11:18:28.238529  182098 exec_runner.go:144] found /home/jenkins/minikube-integration/17223-136166/.minikube/key.pem, removing ...
	I0911 11:18:28.238535  182098 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17223-136166/.minikube/key.pem
	I0911 11:18:28.238554  182098 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17223-136166/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17223-136166/.minikube/key.pem (1679 bytes)
	I0911 11:18:28.238598  182098 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17223-136166/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17223-136166/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17223-136166/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-452365 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-452365]
	I0911 11:18:28.338074  182098 provision.go:172] copyRemoteCerts
	I0911 11:18:28.338172  182098 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0911 11:18:28.338221  182098 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-452365
	I0911 11:18:28.355011  182098 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32907 SSHKeyPath:/home/jenkins/minikube-integration/17223-136166/.minikube/machines/ingress-addon-legacy-452365/id_rsa Username:docker}
	I0911 11:18:28.446337  182098 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-136166/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0911 11:18:28.446408  182098 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0911 11:18:28.468339  182098 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-136166/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0911 11:18:28.468412  182098 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0911 11:18:28.489397  182098 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-136166/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0911 11:18:28.489456  182098 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0911 11:18:28.509887  182098 provision.go:86] duration metric: configureAuth took 287.71421ms
	I0911 11:18:28.509917  182098 ubuntu.go:193] setting minikube options for container-runtime
	I0911 11:18:28.510134  182098 config.go:182] Loaded profile config "ingress-addon-legacy-452365": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I0911 11:18:28.510249  182098 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-452365
	I0911 11:18:28.527000  182098 main.go:141] libmachine: Using SSH client type: native
	I0911 11:18:28.527584  182098 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 127.0.0.1 32907 <nil> <nil>}
	I0911 11:18:28.527612  182098 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0911 11:18:28.762824  182098 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0911 11:18:28.762852  182098 machine.go:91] provisioned docker machine in 3.837847862s
	I0911 11:18:28.762861  182098 client.go:171] LocalClient.Create took 11.630596409s
	I0911 11:18:28.762880  182098 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-452365" took 11.630645999s
	I0911 11:18:28.762891  182098 start.go:300] post-start starting for "ingress-addon-legacy-452365" (driver="docker")
	I0911 11:18:28.762900  182098 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0911 11:18:28.762958  182098 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0911 11:18:28.763006  182098 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-452365
	I0911 11:18:28.779305  182098 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32907 SSHKeyPath:/home/jenkins/minikube-integration/17223-136166/.minikube/machines/ingress-addon-legacy-452365/id_rsa Username:docker}
	I0911 11:18:28.870574  182098 ssh_runner.go:195] Run: cat /etc/os-release
	I0911 11:18:28.873476  182098 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0911 11:18:28.873504  182098 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0911 11:18:28.873513  182098 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0911 11:18:28.873519  182098 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0911 11:18:28.873528  182098 filesync.go:126] Scanning /home/jenkins/minikube-integration/17223-136166/.minikube/addons for local assets ...
	I0911 11:18:28.873581  182098 filesync.go:126] Scanning /home/jenkins/minikube-integration/17223-136166/.minikube/files for local assets ...
	I0911 11:18:28.873648  182098 filesync.go:149] local asset: /home/jenkins/minikube-integration/17223-136166/.minikube/files/etc/ssl/certs/1434172.pem -> 1434172.pem in /etc/ssl/certs
	I0911 11:18:28.873661  182098 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-136166/.minikube/files/etc/ssl/certs/1434172.pem -> /etc/ssl/certs/1434172.pem
	I0911 11:18:28.873749  182098 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0911 11:18:28.881142  182098 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/files/etc/ssl/certs/1434172.pem --> /etc/ssl/certs/1434172.pem (1708 bytes)
	I0911 11:18:28.901513  182098 start.go:303] post-start completed in 138.609157ms
	I0911 11:18:28.901830  182098 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-452365
	I0911 11:18:28.917893  182098 profile.go:148] Saving config to /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/ingress-addon-legacy-452365/config.json ...
	I0911 11:18:28.918163  182098 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0911 11:18:28.918205  182098 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-452365
	I0911 11:18:28.933763  182098 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32907 SSHKeyPath:/home/jenkins/minikube-integration/17223-136166/.minikube/machines/ingress-addon-legacy-452365/id_rsa Username:docker}
	I0911 11:18:29.022693  182098 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0911 11:18:29.026730  182098 start.go:128] duration metric: createHost completed in 11.897065387s
	I0911 11:18:29.026762  182098 start.go:83] releasing machines lock for "ingress-addon-legacy-452365", held for 11.897200677s
	I0911 11:18:29.026835  182098 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-452365
	I0911 11:18:29.042746  182098 ssh_runner.go:195] Run: cat /version.json
	I0911 11:18:29.042786  182098 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0911 11:18:29.042809  182098 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-452365
	I0911 11:18:29.042842  182098 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-452365
	I0911 11:18:29.059599  182098 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32907 SSHKeyPath:/home/jenkins/minikube-integration/17223-136166/.minikube/machines/ingress-addon-legacy-452365/id_rsa Username:docker}
	I0911 11:18:29.060463  182098 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32907 SSHKeyPath:/home/jenkins/minikube-integration/17223-136166/.minikube/machines/ingress-addon-legacy-452365/id_rsa Username:docker}
	I0911 11:18:29.231394  182098 ssh_runner.go:195] Run: systemctl --version
	I0911 11:18:29.235450  182098 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0911 11:18:29.371251  182098 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0911 11:18:29.375369  182098 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0911 11:18:29.393163  182098 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0911 11:18:29.393239  182098 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0911 11:18:29.419778  182098 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0911 11:18:29.419799  182098 start.go:466] detecting cgroup driver to use...
	I0911 11:18:29.419826  182098 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0911 11:18:29.419864  182098 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0911 11:18:29.433407  182098 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0911 11:18:29.443028  182098 docker.go:196] disabling cri-docker service (if available) ...
	I0911 11:18:29.443079  182098 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0911 11:18:29.455293  182098 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0911 11:18:29.467811  182098 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0911 11:18:29.543244  182098 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0911 11:18:29.622287  182098 docker.go:212] disabling docker service ...
	I0911 11:18:29.622346  182098 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0911 11:18:29.639257  182098 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0911 11:18:29.649370  182098 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0911 11:18:29.730956  182098 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0911 11:18:29.811406  182098 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0911 11:18:29.821499  182098 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0911 11:18:29.835406  182098 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0911 11:18:29.835468  182098 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 11:18:29.843826  182098 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0911 11:18:29.843889  182098 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 11:18:29.852400  182098 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 11:18:29.860579  182098 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 11:18:29.868749  182098 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0911 11:18:29.876409  182098 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0911 11:18:29.883532  182098 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0911 11:18:29.890535  182098 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0911 11:18:29.969915  182098 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0911 11:18:30.060506  182098 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0911 11:18:30.060572  182098 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0911 11:18:30.063813  182098 start.go:534] Will wait 60s for crictl version
	I0911 11:18:30.063864  182098 ssh_runner.go:195] Run: which crictl
	I0911 11:18:30.066754  182098 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0911 11:18:30.097920  182098 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0911 11:18:30.097997  182098 ssh_runner.go:195] Run: crio --version
	I0911 11:18:30.129820  182098 ssh_runner.go:195] Run: crio --version
	I0911 11:18:30.166074  182098 out.go:177] * Preparing Kubernetes v1.18.20 on CRI-O 1.24.6 ...
	I0911 11:18:30.167394  182098 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-452365 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0911 11:18:30.183281  182098 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0911 11:18:30.186543  182098 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0911 11:18:30.196465  182098 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0911 11:18:30.196518  182098 ssh_runner.go:195] Run: sudo crictl images --output json
	I0911 11:18:30.240882  182098 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I0911 11:18:30.240937  182098 ssh_runner.go:195] Run: which lz4
	I0911 11:18:30.244165  182098 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-136166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0911 11:18:30.244242  182098 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0911 11:18:30.247729  182098 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0911 11:18:30.247760  182098 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (495439307 bytes)
	I0911 11:18:31.177264  182098 crio.go:444] Took 0.933043 seconds to copy over tarball
	I0911 11:18:31.177319  182098 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0911 11:18:33.497876  182098 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.320528537s)
	I0911 11:18:33.497901  182098 crio.go:451] Took 2.320615 seconds to extract the tarball
	I0911 11:18:33.497910  182098 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0911 11:18:33.567473  182098 ssh_runner.go:195] Run: sudo crictl images --output json
	I0911 11:18:33.599409  182098 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I0911 11:18:33.599430  182098 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0911 11:18:33.599484  182098 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0911 11:18:33.599504  182098 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I0911 11:18:33.599529  182098 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0911 11:18:33.599558  182098 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I0911 11:18:33.599573  182098 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0911 11:18:33.599558  182098 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0911 11:18:33.599648  182098 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I0911 11:18:33.599667  182098 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I0911 11:18:33.600897  182098 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I0911 11:18:33.600905  182098 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0911 11:18:33.600925  182098 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0911 11:18:33.600901  182098 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I0911 11:18:33.600965  182098 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I0911 11:18:33.600898  182098 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I0911 11:18:33.601097  182098 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I0911 11:18:33.600947  182098 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
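The "daemon lookup ... No such image" lines above reflect a try-local-then-remote order when resolving image references. A minimal sketch of that order with go-containerregistry, the library minikube's image handling is built on (the pause image reference comes from the list above):

    package main

    import (
    	"fmt"

    	"github.com/google/go-containerregistry/pkg/name"
    	v1 "github.com/google/go-containerregistry/pkg/v1"
    	"github.com/google/go-containerregistry/pkg/v1/daemon"
    	"github.com/google/go-containerregistry/pkg/v1/remote"
    )

    // retrieve asks the local Docker daemon first; on "No such image" (or any
    // daemon error) it falls back to the remote registry.
    func retrieve(image string) (v1.Image, error) {
    	ref, err := name.ParseReference(image)
    	if err != nil {
    		return nil, err
    	}
    	if img, err := daemon.Image(ref); err == nil {
    		return img, nil // found in the local daemon
    	}
    	return remote.Image(ref) // not local: resolve via the registry
    }

    func main() {
    	img, err := retrieve("registry.k8s.io/pause:3.2")
    	if err != nil {
    		panic(err)
    	}
    	d, _ := img.Digest()
    	fmt.Println("digest:", d)
    }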
	I0911 11:18:33.749055  182098 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	I0911 11:18:33.756410  182098 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	I0911 11:18:33.756511  182098 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I0911 11:18:33.768228  182098 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	I0911 11:18:33.768567  182098 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I0911 11:18:33.769598  182098 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I0911 11:18:33.797799  182098 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1" in container runtime
	I0911 11:18:33.797839  182098 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I0911 11:18:33.797881  182098 ssh_runner.go:195] Run: which crictl
	I0911 11:18:33.814201  182098 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0911 11:18:33.871580  182098 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f" in container runtime
	I0911 11:18:33.871635  182098 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.3-0
	I0911 11:18:33.871687  182098 ssh_runner.go:195] Run: which crictl
	I0911 11:18:33.871801  182098 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290" in container runtime
	I0911 11:18:33.871832  182098 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0911 11:18:33.871863  182098 ssh_runner.go:195] Run: which crictl
	I0911 11:18:33.877192  182098 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba" in container runtime
	I0911 11:18:33.877240  182098 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I0911 11:18:33.877285  182098 ssh_runner.go:195] Run: which crictl
	I0911 11:18:33.883490  182098 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5" in container runtime
	I0911 11:18:33.883532  182098 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.7
	I0911 11:18:33.883568  182098 ssh_runner.go:195] Run: which crictl
	I0911 11:18:33.885524  182098 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346" in container runtime
	I0911 11:18:33.885540  182098 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.18.20
	I0911 11:18:33.885565  182098 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I0911 11:18:33.885612  182098 ssh_runner.go:195] Run: which crictl
	I0911 11:18:33.961060  182098 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0911 11:18:33.961110  182098 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0911 11:18:33.961163  182098 ssh_runner.go:195] Run: which crictl
	I0911 11:18:33.961175  182098 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.3-0
	I0911 11:18:33.961209  182098 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I0911 11:18:33.961277  182098 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.18.20
	I0911 11:18:33.961334  182098 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.7
	I0911 11:18:33.961405  182098 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.18.20
	I0911 11:18:33.984969  182098 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17223-136166/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.20
	I0911 11:18:34.073807  182098 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17223-136166/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7
	I0911 11:18:34.073881  182098 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17223-136166/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.20
	I0911 11:18:34.075781  182098 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17223-136166/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I0911 11:18:34.075850  182098 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17223-136166/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.20
	I0911 11:18:34.075898  182098 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0911 11:18:34.075936  182098 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17223-136166/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.20
	I0911 11:18:34.106538  182098 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17223-136166/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0911 11:18:34.218936  182098 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0911 11:18:34.353384  182098 cache_images.go:92] LoadImages completed in 753.937627ms
	W0911 11:18:34.353487  182098 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17223-136166/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.20: no such file or directory
	I0911 11:18:34.353570  182098 ssh_runner.go:195] Run: crio config
	I0911 11:18:34.394172  182098 cni.go:84] Creating CNI manager for ""
	I0911 11:18:34.394202  182098 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0911 11:18:34.394234  182098 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0911 11:18:34.394256  182098 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-452365 NodeName:ingress-addon-legacy-452365 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0911 11:18:34.394405  182098 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "ingress-addon-legacy-452365"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
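The rendered config above comes from templating the kubeadm options struct logged at 11:18:34.394256. A trimmed illustration of that rendering with text/template, reproducing only a few of the fields (this is not minikube's actual template; values are the ones from this run):

    package main

    import (
    	"os"
    	"text/template"
    )

    // Cut-down stand-in for the kubeadm options struct in the log.
    type kubeadmOpts struct {
    	KubernetesVersion string
    	ClusterName       string
    	AdvertiseAddress  string
    	APIServerPort     int
    }

    const tmpl = `apiVersion: kubeadm.k8s.io/v1beta2
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.APIServerPort}}
    ---
    apiVersion: kubeadm.k8s.io/v1beta2
    kind: ClusterConfiguration
    clusterName: {{.ClusterName}}
    kubernetesVersion: {{.KubernetesVersion}}
    `

    func main() {
    	t := template.Must(template.New("kubeadm").Parse(tmpl))
    	opts := kubeadmOpts{
    		KubernetesVersion: "v1.18.20",
    		ClusterName:       "mk", // clusterName in the rendered YAML above
    		AdvertiseAddress:  "192.168.49.2",
    		APIServerPort:     8443,
    	}
    	if err := t.Execute(os.Stdout, opts); err != nil {
    		panic(err)
    	}
    }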
	
	I0911 11:18:34.394526  182098 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=ingress-addon-legacy-452365 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-452365 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0911 11:18:34.394586  182098 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I0911 11:18:34.402594  182098 binaries.go:44] Found k8s binaries, skipping transfer
	I0911 11:18:34.402658  182098 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0911 11:18:34.410384  182098 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (486 bytes)
	I0911 11:18:34.425596  182098 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I0911 11:18:34.441243  182098 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0911 11:18:34.456772  182098 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0911 11:18:34.460020  182098 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0911 11:18:34.469506  182098 certs.go:56] Setting up /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/ingress-addon-legacy-452365 for IP: 192.168.49.2
	I0911 11:18:34.469546  182098 certs.go:190] acquiring lock for shared ca certs: {Name:mk582952512be9164e5fce9dc802f18bafa97346 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 11:18:34.469724  182098 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17223-136166/.minikube/ca.key
	I0911 11:18:34.469767  182098 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17223-136166/.minikube/proxy-client-ca.key
	I0911 11:18:34.469811  182098 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/ingress-addon-legacy-452365/client.key
	I0911 11:18:34.469832  182098 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/ingress-addon-legacy-452365/client.crt with IP's: []
	I0911 11:18:34.534608  182098 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/ingress-addon-legacy-452365/client.crt ...
	I0911 11:18:34.534639  182098 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/ingress-addon-legacy-452365/client.crt: {Name:mk0edc5db61534234a73da581633c98ed9299610 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 11:18:34.534808  182098 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/ingress-addon-legacy-452365/client.key ...
	I0911 11:18:34.534819  182098 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/ingress-addon-legacy-452365/client.key: {Name:mk95cebf6dc13758b5f48bada89adf47b495d6ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 11:18:34.534889  182098 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/ingress-addon-legacy-452365/apiserver.key.dd3b5fb2
	I0911 11:18:34.534903  182098 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/ingress-addon-legacy-452365/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0911 11:18:34.763285  182098 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/ingress-addon-legacy-452365/apiserver.crt.dd3b5fb2 ...
	I0911 11:18:34.763318  182098 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/ingress-addon-legacy-452365/apiserver.crt.dd3b5fb2: {Name:mke7e61c772c18aca2ce16b98513e4332921bc24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 11:18:34.763611  182098 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/ingress-addon-legacy-452365/apiserver.key.dd3b5fb2 ...
	I0911 11:18:34.763641  182098 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/ingress-addon-legacy-452365/apiserver.key.dd3b5fb2: {Name:mk82eb2a4a9e367914642440bbae79d46930c4fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 11:18:34.763750  182098 certs.go:337] copying /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/ingress-addon-legacy-452365/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/ingress-addon-legacy-452365/apiserver.crt
	I0911 11:18:34.763840  182098 certs.go:341] copying /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/ingress-addon-legacy-452365/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/ingress-addon-legacy-452365/apiserver.key
	I0911 11:18:34.763891  182098 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/ingress-addon-legacy-452365/proxy-client.key
	I0911 11:18:34.763905  182098 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/ingress-addon-legacy-452365/proxy-client.crt with IP's: []
	I0911 11:18:34.948751  182098 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/ingress-addon-legacy-452365/proxy-client.crt ...
	I0911 11:18:34.948783  182098 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/ingress-addon-legacy-452365/proxy-client.crt: {Name:mk613460de9d47a3606bbc9778c8611dacf16d8c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 11:18:34.948937  182098 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/ingress-addon-legacy-452365/proxy-client.key ...
	I0911 11:18:34.948948  182098 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/ingress-addon-legacy-452365/proxy-client.key: {Name:mk80c13b6dd6a56212f6181f9727f6467820903d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
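The apiserver certificate generated above carries the IP SANs [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]. A compact sketch of issuing such a certificate with crypto/x509 (self-signed here for brevity, whereas minikube signs with its CA; the validity period is arbitrary):

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	tpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{CommonName: "minikube"},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour), // arbitrary
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// IP SANs matching the list in the log line above.
    		IPAddresses: []net.IP{
    			net.ParseIP("192.168.49.2"), net.ParseIP("10.96.0.1"),
    			net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
    		},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tpl, tpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }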
	I0911 11:18:34.949022  182098 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/ingress-addon-legacy-452365/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0911 11:18:34.949051  182098 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/ingress-addon-legacy-452365/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0911 11:18:34.949066  182098 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/ingress-addon-legacy-452365/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0911 11:18:34.949077  182098 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/ingress-addon-legacy-452365/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0911 11:18:34.949092  182098 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-136166/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0911 11:18:34.949104  182098 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-136166/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0911 11:18:34.949116  182098 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-136166/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0911 11:18:34.949126  182098 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-136166/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0911 11:18:34.949171  182098 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-136166/.minikube/certs/home/jenkins/minikube-integration/17223-136166/.minikube/certs/143417.pem (1338 bytes)
	W0911 11:18:34.949207  182098 certs.go:433] ignoring /home/jenkins/minikube-integration/17223-136166/.minikube/certs/home/jenkins/minikube-integration/17223-136166/.minikube/certs/143417_empty.pem, impossibly tiny 0 bytes
	I0911 11:18:34.949217  182098 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-136166/.minikube/certs/home/jenkins/minikube-integration/17223-136166/.minikube/certs/ca-key.pem (1679 bytes)
	I0911 11:18:34.949243  182098 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-136166/.minikube/certs/home/jenkins/minikube-integration/17223-136166/.minikube/certs/ca.pem (1082 bytes)
	I0911 11:18:34.949266  182098 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-136166/.minikube/certs/home/jenkins/minikube-integration/17223-136166/.minikube/certs/cert.pem (1123 bytes)
	I0911 11:18:34.949287  182098 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-136166/.minikube/certs/home/jenkins/minikube-integration/17223-136166/.minikube/certs/key.pem (1679 bytes)
	I0911 11:18:34.949337  182098 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-136166/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17223-136166/.minikube/files/etc/ssl/certs/1434172.pem (1708 bytes)
	I0911 11:18:34.949367  182098 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-136166/.minikube/certs/143417.pem -> /usr/share/ca-certificates/143417.pem
	I0911 11:18:34.949379  182098 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-136166/.minikube/files/etc/ssl/certs/1434172.pem -> /usr/share/ca-certificates/1434172.pem
	I0911 11:18:34.949391  182098 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-136166/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0911 11:18:34.949927  182098 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/ingress-addon-legacy-452365/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0911 11:18:34.971546  182098 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/ingress-addon-legacy-452365/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0911 11:18:34.993384  182098 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/ingress-addon-legacy-452365/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0911 11:18:35.014943  182098 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/ingress-addon-legacy-452365/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0911 11:18:35.036126  182098 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0911 11:18:35.056989  182098 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0911 11:18:35.077969  182098 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0911 11:18:35.098624  182098 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0911 11:18:35.119655  182098 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/certs/143417.pem --> /usr/share/ca-certificates/143417.pem (1338 bytes)
	I0911 11:18:35.141236  182098 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/files/etc/ssl/certs/1434172.pem --> /usr/share/ca-certificates/1434172.pem (1708 bytes)
	I0911 11:18:35.162438  182098 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0911 11:18:35.183206  182098 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
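The certs.go lines above show the pattern: minikube enumerates the host-side certificates, skips anything unusable (note the zero-byte 143417_empty.pem being ignored), and scp's the valid ones into /var/lib/minikube/certs on the node. A minimal Go sketch of that kind of pre-copy sanity check, assuming an illustrative isUsableCert helper rather than minikube's actual implementation:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

// isUsableCert is a hypothetical pre-copy check: reject files too small
// to hold a PEM certificate (like the 0-byte 143417_empty.pem skipped
// above), then confirm the PEM actually parses as an x509 certificate.
func isUsableCert(path string) error {
	info, err := os.Stat(path)
	if err != nil {
		return err
	}
	if info.Size() == 0 {
		return fmt.Errorf("ignoring %s, impossibly tiny %d bytes", path, info.Size())
	}
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return fmt.Errorf("%s is not PEM-encoded", path)
	}
	if _, err := x509.ParseCertificate(block.Bytes); err != nil {
		return fmt.Errorf("%s does not parse as a certificate: %w", path, err)
	}
	return nil
}

func main() {
	for _, p := range os.Args[1:] {
		if err := isUsableCert(p); err != nil {
			fmt.Fprintln(os.Stderr, err)
			continue
		}
		fmt.Println("ok:", p) // safe to copy to /var/lib/minikube/certs
	}
}

Checking both the size and that the PEM parses avoids shipping a corrupt CA to the node only to have kubeadm fail later.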
	I0911 11:18:35.199398  182098 ssh_runner.go:195] Run: openssl version
	I0911 11:18:35.204390  182098 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/143417.pem && ln -fs /usr/share/ca-certificates/143417.pem /etc/ssl/certs/143417.pem"
	I0911 11:18:35.213235  182098 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/143417.pem
	I0911 11:18:35.216887  182098 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 11 11:15 /usr/share/ca-certificates/143417.pem
	I0911 11:18:35.216946  182098 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/143417.pem
	I0911 11:18:35.223494  182098 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/143417.pem /etc/ssl/certs/51391683.0"
	I0911 11:18:35.232673  182098 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1434172.pem && ln -fs /usr/share/ca-certificates/1434172.pem /etc/ssl/certs/1434172.pem"
	I0911 11:18:35.241580  182098 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1434172.pem
	I0911 11:18:35.245026  182098 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 11 11:15 /usr/share/ca-certificates/1434172.pem
	I0911 11:18:35.245101  182098 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1434172.pem
	I0911 11:18:35.251625  182098 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1434172.pem /etc/ssl/certs/3ec20f2e.0"
	I0911 11:18:35.260629  182098 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0911 11:18:35.269696  182098 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0911 11:18:35.273038  182098 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 11 11:09 /usr/share/ca-certificates/minikubeCA.pem
	I0911 11:18:35.273090  182098 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0911 11:18:35.279527  182098 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
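The three `openssl x509 -hash -noout` runs above compute OpenSSL's subject-name hash for each CA, and the `ln -fs` commands create the /etc/ssl/certs/<hash>.0 symlinks that OpenSSL-linked clients use to look up a CA by subject (the .0 suffix disambiguates hash collisions). A sketch of the same two steps driven from Go, shelling out to the same openssl binary; installCA is an illustrative name and openssl is assumed to be on PATH:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCA hashes certPath with the system openssl and links it into
// certsDir under <subject-hash>.0, mirroring the `openssl x509 -hash
// -noout` followed by `ln -fs` sequence in the log above.
func installCA(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	os.Remove(link) // emulate ln -f: replace any existing link
	return os.Symlink(certPath, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}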
	I0911 11:18:35.288358  182098 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0911 11:18:35.291624  182098 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0911 11:18:35.291710  182098 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-452365 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-452365 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 11:18:35.291795  182098 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0911 11:18:35.291838  182098 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0911 11:18:35.324910  182098 cri.go:89] found id: ""
	I0911 11:18:35.324981  182098 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0911 11:18:35.333180  182098 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0911 11:18:35.341082  182098 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0911 11:18:35.341138  182098 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0911 11:18:35.348948  182098 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0911 11:18:35.348989  182098 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0911 11:18:35.391248  182098 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0911 11:18:35.391332  182098 kubeadm.go:322] [preflight] Running pre-flight checks
	I0911 11:18:35.428559  182098 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0911 11:18:35.428619  182098 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1041-gcp
	I0911 11:18:35.428670  182098 kubeadm.go:322] OS: Linux
	I0911 11:18:35.428720  182098 kubeadm.go:322] CGROUPS_CPU: enabled
	I0911 11:18:35.428762  182098 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0911 11:18:35.428847  182098 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0911 11:18:35.428929  182098 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0911 11:18:35.428980  182098 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0911 11:18:35.429029  182098 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0911 11:18:35.496183  182098 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0911 11:18:35.496359  182098 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0911 11:18:35.496504  182098 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0911 11:18:35.671117  182098 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0911 11:18:35.672094  182098 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0911 11:18:35.672234  182098 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0911 11:18:35.754748  182098 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0911 11:18:35.757872  182098 out.go:204]   - Generating certificates and keys ...
	I0911 11:18:35.758056  182098 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0911 11:18:35.758158  182098 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0911 11:18:35.835841  182098 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0911 11:18:35.964147  182098 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0911 11:18:36.229592  182098 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0911 11:18:36.342700  182098 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0911 11:18:36.387875  182098 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0911 11:18:36.388052  182098 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-452365 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0911 11:18:36.467423  182098 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0911 11:18:36.467609  182098 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-452365 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0911 11:18:36.614149  182098 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0911 11:18:36.673873  182098 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0911 11:18:36.755032  182098 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0911 11:18:36.755145  182098 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0911 11:18:37.063897  182098 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0911 11:18:37.198678  182098 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0911 11:18:37.440202  182098 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0911 11:18:37.507420  182098 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0911 11:18:37.508044  182098 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0911 11:18:37.510049  182098 out.go:204]   - Booting up control plane ...
	I0911 11:18:37.510194  182098 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0911 11:18:37.513542  182098 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0911 11:18:37.514525  182098 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0911 11:18:37.515281  182098 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0911 11:18:37.518316  182098 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0911 11:18:44.520544  182098 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.002283 seconds
	I0911 11:18:44.520698  182098 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0911 11:18:44.531307  182098 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I0911 11:18:45.046657  182098 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0911 11:18:45.046837  182098 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-452365 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0911 11:18:45.554323  182098 kubeadm.go:322] [bootstrap-token] Using token: 8wb7lm.7lbk14q48xdk6uw3
	I0911 11:18:45.555882  182098 out.go:204]   - Configuring RBAC rules ...
	I0911 11:18:45.556039  182098 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0911 11:18:45.559669  182098 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0911 11:18:45.565803  182098 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0911 11:18:45.567835  182098 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0911 11:18:45.569723  182098 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0911 11:18:45.571502  182098 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0911 11:18:45.578146  182098 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0911 11:18:45.735490  182098 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0911 11:18:45.967208  182098 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0911 11:18:45.968233  182098 kubeadm.go:322] 
	I0911 11:18:45.968321  182098 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0911 11:18:45.968337  182098 kubeadm.go:322] 
	I0911 11:18:45.968420  182098 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0911 11:18:45.968427  182098 kubeadm.go:322] 
	I0911 11:18:45.968446  182098 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0911 11:18:45.968526  182098 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0911 11:18:45.968604  182098 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0911 11:18:45.968623  182098 kubeadm.go:322] 
	I0911 11:18:45.968692  182098 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0911 11:18:45.968811  182098 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0911 11:18:45.968905  182098 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0911 11:18:45.968913  182098 kubeadm.go:322] 
	I0911 11:18:45.969025  182098 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0911 11:18:45.969126  182098 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0911 11:18:45.969143  182098 kubeadm.go:322] 
	I0911 11:18:45.969237  182098 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 8wb7lm.7lbk14q48xdk6uw3 \
	I0911 11:18:45.969320  182098 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:29dd5c485ccb33701674fbb0d9adfdc876e5b65e82ced9970c93ac8717b7a347 \
	I0911 11:18:45.969340  182098 kubeadm.go:322]     --control-plane 
	I0911 11:18:45.969346  182098 kubeadm.go:322] 
	I0911 11:18:45.969456  182098 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0911 11:18:45.969466  182098 kubeadm.go:322] 
	I0911 11:18:45.969556  182098 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 8wb7lm.7lbk14q48xdk6uw3 \
	I0911 11:18:45.969701  182098 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:29dd5c485ccb33701674fbb0d9adfdc876e5b65e82ced9970c93ac8717b7a347 
	I0911 11:18:45.971320  182098 kubeadm.go:322] W0911 11:18:35.390673    1378 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0911 11:18:45.971593  182098 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1041-gcp\n", err: exit status 1
	I0911 11:18:45.971727  182098 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0911 11:18:45.971877  182098 kubeadm.go:322] W0911 11:18:37.513264    1378 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0911 11:18:45.972024  182098 kubeadm.go:322] W0911 11:18:37.514359    1378 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
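The join commands kubeadm prints above carry a --discovery-token-ca-cert-hash; for kubeadm this value is a SHA-256 over the DER-encoded Subject Public Key Info of the cluster CA certificate. A self-contained Go sketch that recomputes it from the node's ca.crt (path taken from the certs step earlier in this log):

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

// Recompute kubeadm's discovery-token-ca-cert-hash: SHA-256 over the
// DER-encoded SubjectPublicKeyInfo of the cluster CA certificate.
func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "ca.crt is not PEM")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%x\n", sum)
}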
	I0911 11:18:45.972053  182098 cni.go:84] Creating CNI manager for ""
	I0911 11:18:45.972066  182098 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0911 11:18:45.974783  182098 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0911 11:18:45.976089  182098 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0911 11:18:45.979680  182098 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.18.20/kubectl ...
	I0911 11:18:45.979697  182098 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0911 11:18:45.995800  182098 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
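The two lines above are how the kindnet manifest lands on the cluster: minikube writes the rendered YAML to /var/tmp/minikube/cni.yaml over SSH, then applies it with the version-matched kubectl it keeps under /var/lib/minikube/binaries. A sketch of the same apply step from Go, feeding the manifest over stdin instead of a temp file; the placeholder ConfigMap stands in for the kindnet manifest, whose contents are not shown in the log:

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// A placeholder object standing in for the kindnet manifest minikube
// renders; the real YAML is not shown in the log above.
const cniYAML = `apiVersion: v1
kind: ConfigMap
metadata:
  name: cni-manifest-placeholder
  namespace: kube-system
`

func main() {
	// Same apply step as the log, but feeding the manifest over stdin
	// ("-f -") instead of a temp file on the node.
	cmd := exec.Command("kubectl", "apply", "--kubeconfig=/var/lib/minikube/kubeconfig", "-f", "-")
	cmd.Stdin = bytes.NewReader([]byte(cniYAML))
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		os.Exit(1)
	}
}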
	I0911 11:18:46.443543  182098 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0911 11:18:46.443644  182098 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=58460de6978298fe1c37b30354468f3a287d03e9 minikube.k8s.io/name=ingress-addon-legacy-452365 minikube.k8s.io/updated_at=2023_09_11T11_18_46_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 11:18:46.443647  182098 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 11:18:46.521733  182098 ops.go:34] apiserver oom_adj: -16
	I0911 11:18:46.521869  182098 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 11:18:46.620251  182098 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 11:18:47.185650  182098 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 11:18:47.685158  182098 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 11:18:48.185423  182098 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 11:18:48.685986  182098 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 11:18:49.185678  182098 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 11:18:49.685524  182098 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 11:18:50.185829  182098 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 11:18:50.685659  182098 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 11:18:51.185742  182098 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 11:18:51.685140  182098 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 11:18:52.185369  182098 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 11:18:52.685995  182098 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 11:18:53.185424  182098 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 11:18:53.685853  182098 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 11:18:54.185249  182098 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 11:18:54.685172  182098 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 11:18:55.185875  182098 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 11:18:55.685273  182098 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 11:18:56.185518  182098 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 11:18:56.686044  182098 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 11:18:57.185980  182098 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 11:18:57.685627  182098 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 11:18:58.186108  182098 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 11:18:58.685339  182098 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 11:18:59.185439  182098 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 11:18:59.685319  182098 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 11:19:00.185391  182098 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 11:19:00.685379  182098 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 11:19:01.185852  182098 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 11:19:01.685756  182098 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 11:19:01.779459  182098 kubeadm.go:1081] duration metric: took 15.335895147s to wait for elevateKubeSystemPrivileges.
	I0911 11:19:01.779505  182098 kubeadm.go:406] StartCluster complete in 26.48780204s
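The burst of `kubectl get sa default` calls between 11:18:46 and 11:19:01 is a poll: minikube reruns the command roughly every 500ms until the default service account exists, which is where the 15.3s elevateKubeSystemPrivileges metric above comes from. A stdlib-only Go sketch of that poll-until-success shape, with a shortened form of the command from the log and an assumed 60s timeout:

package main

import (
	"context"
	"fmt"
	"os"
	"os/exec"
	"time"
)

// pollUntil reruns a command until it succeeds or the deadline passes,
// mirroring the ~500ms retry cadence visible in the log above.
func pollUntil(ctx context.Context, interval time.Duration, name string, args ...string) error {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		if err := exec.CommandContext(ctx, name, args...).Run(); err == nil {
			return nil // e.g. the default service account now exists
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 60*time.Second) // timeout assumed
	defer cancel()
	err := pollUntil(ctx, 500*time.Millisecond,
		"kubectl", "get", "sa", "default", "--kubeconfig=/var/lib/minikube/kubeconfig")
	if err != nil {
		fmt.Fprintln(os.Stderr, "default service account never appeared:", err)
		os.Exit(1)
	}
}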
	I0911 11:19:01.779542  182098 settings.go:142] acquiring lock: {Name:mk01327a907b1ed5b7990abeca4c89109d2bed5d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 11:19:01.779629  182098 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17223-136166/kubeconfig
	I0911 11:19:01.780315  182098 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17223-136166/kubeconfig: {Name:mk3da3a5a3a5d35dd9d56a597907266732eec114 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 11:19:01.780564  182098 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0911 11:19:01.780658  182098 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0911 11:19:01.780729  182098 config.go:182] Loaded profile config "ingress-addon-legacy-452365": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I0911 11:19:01.780750  182098 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-452365"
	I0911 11:19:01.780771  182098 addons.go:231] Setting addon storage-provisioner=true in "ingress-addon-legacy-452365"
	I0911 11:19:01.780836  182098 host.go:66] Checking if "ingress-addon-legacy-452365" exists ...
	I0911 11:19:01.780772  182098 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-452365"
	I0911 11:19:01.780869  182098 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-452365"
	I0911 11:19:01.781069  182098 kapi.go:59] client config for ingress-addon-legacy-452365: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17223-136166/.minikube/profiles/ingress-addon-legacy-452365/client.crt", KeyFile:"/home/jenkins/minikube-integration/17223-136166/.minikube/profiles/ingress-addon-legacy-452365/client.key", CAFile:"/home/jenkins/minikube-integration/17223-136166/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d6c140), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0911 11:19:01.781177  182098 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-452365 --format={{.State.Status}}
	I0911 11:19:01.781347  182098 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-452365 --format={{.State.Status}}
	I0911 11:19:01.781835  182098 cert_rotation.go:137] Starting client certificate rotation controller
	I0911 11:19:01.801580  182098 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-452365" context rescaled to 1 replicas
	I0911 11:19:01.801631  182098 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0911 11:19:01.803502  182098 out.go:177] * Verifying Kubernetes components...
	I0911 11:19:01.805375  182098 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0911 11:19:01.807038  182098 kapi.go:59] client config for ingress-addon-legacy-452365: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17223-136166/.minikube/profiles/ingress-addon-legacy-452365/client.crt", KeyFile:"/home/jenkins/minikube-integration/17223-136166/.minikube/profiles/ingress-addon-legacy-452365/client.key", CAFile:"/home/jenkins/minikube-integration/17223-136166/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d6c140), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0911 11:19:01.813591  182098 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0911 11:19:01.812839  182098 addons.go:231] Setting addon default-storageclass=true in "ingress-addon-legacy-452365"
	I0911 11:19:01.815059  182098 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0911 11:19:01.815075  182098 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0911 11:19:01.815146  182098 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-452365
	I0911 11:19:01.815080  182098 host.go:66] Checking if "ingress-addon-legacy-452365" exists ...
	I0911 11:19:01.815603  182098 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-452365 --format={{.State.Status}}
	I0911 11:19:01.832677  182098 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0911 11:19:01.832706  182098 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0911 11:19:01.832772  182098 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-452365
	I0911 11:19:01.832867  182098 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32907 SSHKeyPath:/home/jenkins/minikube-integration/17223-136166/.minikube/machines/ingress-addon-legacy-452365/id_rsa Username:docker}
	I0911 11:19:01.848964  182098 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32907 SSHKeyPath:/home/jenkins/minikube-integration/17223-136166/.minikube/machines/ingress-addon-legacy-452365/id_rsa Username:docker}
	I0911 11:19:01.994450  182098 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0911 11:19:01.995109  182098 kapi.go:59] client config for ingress-addon-legacy-452365: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17223-136166/.minikube/profiles/ingress-addon-legacy-452365/client.crt", KeyFile:"/home/jenkins/minikube-integration/17223-136166/.minikube/profiles/ingress-addon-legacy-452365/client.key", CAFile:"/home/jenkins/minikube-integration/17223-136166/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d6c140), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0911 11:19:01.995472  182098 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-452365" to be "Ready" ...
	I0911 11:19:02.078707  182098 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0911 11:19:02.183422  182098 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0911 11:19:02.398132  182098 start.go:901] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0911 11:19:02.575324  182098 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0911 11:19:02.576950  182098 addons.go:502] enable addons completed in 796.288845ms: enabled=[default-storageclass storage-provisioner]
	I0911 11:19:04.005460  182098 node_ready.go:58] node "ingress-addon-legacy-452365" has status "Ready":"False"
	I0911 11:19:06.006440  182098 node_ready.go:58] node "ingress-addon-legacy-452365" has status "Ready":"False"
	I0911 11:19:06.540648  182098 node_ready.go:49] node "ingress-addon-legacy-452365" has status "Ready":"True"
	I0911 11:19:06.540675  182098 node_ready.go:38] duration metric: took 4.545178928s waiting for node "ingress-addon-legacy-452365" to be "Ready" ...
	I0911 11:19:06.540684  182098 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0911 11:19:06.564863  182098 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-ss9lq" in "kube-system" namespace to be "Ready" ...
	I0911 11:19:08.572748  182098 pod_ready.go:102] pod "coredns-66bff467f8-ss9lq" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-09-11 11:19:01 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: HostIPs:[] PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0911 11:19:10.574925  182098 pod_ready.go:102] pod "coredns-66bff467f8-ss9lq" in "kube-system" namespace has status "Ready":"False"
	I0911 11:19:12.575675  182098 pod_ready.go:102] pod "coredns-66bff467f8-ss9lq" in "kube-system" namespace has status "Ready":"False"
	I0911 11:19:14.575979  182098 pod_ready.go:102] pod "coredns-66bff467f8-ss9lq" in "kube-system" namespace has status "Ready":"False"
	I0911 11:19:17.075369  182098 pod_ready.go:102] pod "coredns-66bff467f8-ss9lq" in "kube-system" namespace has status "Ready":"False"
	I0911 11:19:17.574858  182098 pod_ready.go:92] pod "coredns-66bff467f8-ss9lq" in "kube-system" namespace has status "Ready":"True"
	I0911 11:19:17.574884  182098 pod_ready.go:81] duration metric: took 11.009992938s waiting for pod "coredns-66bff467f8-ss9lq" in "kube-system" namespace to be "Ready" ...
	I0911 11:19:17.574897  182098 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-452365" in "kube-system" namespace to be "Ready" ...
	I0911 11:19:17.579169  182098 pod_ready.go:92] pod "etcd-ingress-addon-legacy-452365" in "kube-system" namespace has status "Ready":"True"
	I0911 11:19:17.579195  182098 pod_ready.go:81] duration metric: took 4.290771ms waiting for pod "etcd-ingress-addon-legacy-452365" in "kube-system" namespace to be "Ready" ...
	I0911 11:19:17.579211  182098 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-452365" in "kube-system" namespace to be "Ready" ...
	I0911 11:19:17.583339  182098 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-452365" in "kube-system" namespace has status "Ready":"True"
	I0911 11:19:17.583360  182098 pod_ready.go:81] duration metric: took 4.140727ms waiting for pod "kube-apiserver-ingress-addon-legacy-452365" in "kube-system" namespace to be "Ready" ...
	I0911 11:19:17.583372  182098 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-452365" in "kube-system" namespace to be "Ready" ...
	I0911 11:19:17.587086  182098 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-452365" in "kube-system" namespace has status "Ready":"True"
	I0911 11:19:17.587105  182098 pod_ready.go:81] duration metric: took 3.726287ms waiting for pod "kube-controller-manager-ingress-addon-legacy-452365" in "kube-system" namespace to be "Ready" ...
	I0911 11:19:17.587115  182098 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fkp8h" in "kube-system" namespace to be "Ready" ...
	I0911 11:19:17.590990  182098 pod_ready.go:92] pod "kube-proxy-fkp8h" in "kube-system" namespace has status "Ready":"True"
	I0911 11:19:17.591013  182098 pod_ready.go:81] duration metric: took 3.891159ms waiting for pod "kube-proxy-fkp8h" in "kube-system" namespace to be "Ready" ...
	I0911 11:19:17.591025  182098 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-452365" in "kube-system" namespace to be "Ready" ...
	I0911 11:19:17.770475  182098 request.go:629] Waited for 179.349904ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-452365
	I0911 11:19:17.970908  182098 request.go:629] Waited for 197.370183ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ingress-addon-legacy-452365
	I0911 11:19:17.973751  182098 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-452365" in "kube-system" namespace has status "Ready":"True"
	I0911 11:19:17.973777  182098 pod_ready.go:81] duration metric: took 382.741499ms waiting for pod "kube-scheduler-ingress-addon-legacy-452365" in "kube-system" namespace to be "Ready" ...
	I0911 11:19:17.973796  182098 pod_ready.go:38] duration metric: took 11.433082086s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0911 11:19:17.973815  182098 api_server.go:52] waiting for apiserver process to appear ...
	I0911 11:19:17.973880  182098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 11:19:17.984620  182098 api_server.go:72] duration metric: took 16.18295549s to wait for apiserver process to appear ...
	I0911 11:19:17.984641  182098 api_server.go:88] waiting for apiserver healthz status ...
	I0911 11:19:17.984666  182098 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0911 11:19:17.989681  182098 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0911 11:19:17.990506  182098 api_server.go:141] control plane version: v1.18.20
	I0911 11:19:17.990529  182098 api_server.go:131] duration metric: took 5.88237ms to wait for apiserver health ...
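The healthz wait above is a plain HTTPS GET against https://192.168.49.2:8443/healthz, treated as healthy once it returns 200 with body "ok". A sketch of the probe; a real client would trust minikube's ca.crt, and InsecureSkipVerify is used here only to keep the example self-contained:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"os"
	"time"
)

// Probe the apiserver healthz endpoint the way the log above does.
// A production client would load minikube's ca.crt; skipping TLS
// verification here only keeps the sketch self-contained.
func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.49.2:8443/healthz")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("https://192.168.49.2:8443/healthz returned %d: %s\n", resp.StatusCode, body)
}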
	I0911 11:19:17.990537  182098 system_pods.go:43] waiting for kube-system pods to appear ...
	I0911 11:19:18.170944  182098 request.go:629] Waited for 180.341779ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0911 11:19:18.176484  182098 system_pods.go:59] 8 kube-system pods found
	I0911 11:19:18.176516  182098 system_pods.go:61] "coredns-66bff467f8-ss9lq" [d36f6bea-eb9d-4d37-81ae-79ae887d6d37] Running
	I0911 11:19:18.176521  182098 system_pods.go:61] "etcd-ingress-addon-legacy-452365" [531d2183-7d83-43fd-a135-df54fdd89841] Running
	I0911 11:19:18.176525  182098 system_pods.go:61] "kindnet-sfnxz" [a3b2c9cc-6f40-406c-acab-6d36e32ef5fe] Running
	I0911 11:19:18.176533  182098 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-452365" [631b6d5a-e280-4367-b6e1-9ac3347fa872] Running
	I0911 11:19:18.176538  182098 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-452365" [29bf136f-7e72-4ec1-9209-022469a2d9d1] Running
	I0911 11:19:18.176542  182098 system_pods.go:61] "kube-proxy-fkp8h" [0766b7b0-562d-45f7-a572-85ff8cae8e6b] Running
	I0911 11:19:18.176546  182098 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-452365" [99728129-4ec8-4f1d-8a5c-9411e6af4bdc] Running
	I0911 11:19:18.176550  182098 system_pods.go:61] "storage-provisioner" [19513088-988c-4095-b76d-e0d921d77f4a] Running
	I0911 11:19:18.176556  182098 system_pods.go:74] duration metric: took 186.014411ms to wait for pod list to return data ...
	I0911 11:19:18.176573  182098 default_sa.go:34] waiting for default service account to be created ...
	I0911 11:19:18.371027  182098 request.go:629] Waited for 194.373179ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I0911 11:19:18.373374  182098 default_sa.go:45] found service account: "default"
	I0911 11:19:18.373399  182098 default_sa.go:55] duration metric: took 196.819588ms for default service account to be created ...
	I0911 11:19:18.373408  182098 system_pods.go:116] waiting for k8s-apps to be running ...
	I0911 11:19:18.570800  182098 request.go:629] Waited for 197.330486ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0911 11:19:18.576439  182098 system_pods.go:86] 8 kube-system pods found
	I0911 11:19:18.576466  182098 system_pods.go:89] "coredns-66bff467f8-ss9lq" [d36f6bea-eb9d-4d37-81ae-79ae887d6d37] Running
	I0911 11:19:18.576474  182098 system_pods.go:89] "etcd-ingress-addon-legacy-452365" [531d2183-7d83-43fd-a135-df54fdd89841] Running
	I0911 11:19:18.576479  182098 system_pods.go:89] "kindnet-sfnxz" [a3b2c9cc-6f40-406c-acab-6d36e32ef5fe] Running
	I0911 11:19:18.576486  182098 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-452365" [631b6d5a-e280-4367-b6e1-9ac3347fa872] Running
	I0911 11:19:18.576495  182098 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-452365" [29bf136f-7e72-4ec1-9209-022469a2d9d1] Running
	I0911 11:19:18.576506  182098 system_pods.go:89] "kube-proxy-fkp8h" [0766b7b0-562d-45f7-a572-85ff8cae8e6b] Running
	I0911 11:19:18.576511  182098 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-452365" [99728129-4ec8-4f1d-8a5c-9411e6af4bdc] Running
	I0911 11:19:18.576517  182098 system_pods.go:89] "storage-provisioner" [19513088-988c-4095-b76d-e0d921d77f4a] Running
	I0911 11:19:18.576527  182098 system_pods.go:126] duration metric: took 203.112596ms to wait for k8s-apps to be running ...
	I0911 11:19:18.576542  182098 system_svc.go:44] waiting for kubelet service to be running ....
	I0911 11:19:18.576606  182098 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0911 11:19:18.587015  182098 system_svc.go:56] duration metric: took 10.469117ms WaitForService to wait for kubelet.
	I0911 11:19:18.587040  182098 kubeadm.go:581] duration metric: took 16.785378498s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0911 11:19:18.587065  182098 node_conditions.go:102] verifying NodePressure condition ...
	I0911 11:19:18.770473  182098 request.go:629] Waited for 183.332283ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I0911 11:19:18.773255  182098 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0911 11:19:18.773297  182098 node_conditions.go:123] node cpu capacity is 8
	I0911 11:19:18.773309  182098 node_conditions.go:105] duration metric: took 186.240042ms to run NodePressure ...
	I0911 11:19:18.773321  182098 start.go:228] waiting for startup goroutines ...
	I0911 11:19:18.773327  182098 start.go:233] waiting for cluster config update ...
	I0911 11:19:18.773336  182098 start.go:242] writing updated cluster config ...
	I0911 11:19:18.773629  182098 ssh_runner.go:195] Run: rm -f paused
	I0911 11:19:18.820108  182098 start.go:600] kubectl: 1.28.1, cluster: 1.18.20 (minor skew: 10)
	I0911 11:19:18.822276  182098 out.go:177] 
	W0911 11:19:18.823783  182098 out.go:239] ! /usr/local/bin/kubectl is version 1.28.1, which may have incompatibilities with Kubernetes 1.18.20.
	I0911 11:19:18.825415  182098 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I0911 11:19:18.827022  182098 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-452365" cluster and "default" namespace by default
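The "minor skew: 10" warning above compares kubectl's minor version (28) with the cluster's (18); kubectl is only supported within one minor version of the apiserver, so anything larger draws the warning. A toy Go sketch of the comparison, using the version strings from the log:

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minor extracts the minor component from a version like "1.28.1".
func minor(v string) int {
	parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
	n, _ := strconv.Atoi(parts[1])
	return n
}

func main() {
	kubectl, cluster := "1.28.1", "1.18.20" // versions from the log above
	skew := minor(kubectl) - minor(cluster)
	if skew < 0 {
		skew = -skew
	}
	fmt.Printf("minor skew: %d\n", skew) // anything beyond 1 draws the warning
	if skew > 1 {
		fmt.Printf("! kubectl %s may have incompatibilities with Kubernetes %s\n", kubectl, cluster)
	}
}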
	
	* 
	* ==> CRI-O <==
	* Sep 11 11:22:06 ingress-addon-legacy-452365 crio[961]: time="2023-09-11 11:22:06.983777780Z" level=info msg="Started container" PID=4890 containerID=579dd7d2ea6ec746a3aaac1d7ab329e0066c690d7bb38302feb761d682b6772c description=default/hello-world-app-5f5d8b66bb-mjff2/hello-world-app id=4899fe23-916c-4021-8f17-9a00faa98d9d name=/runtime.v1alpha2.RuntimeService/StartContainer sandboxID=10725859ce779036635fe41d76b9ccd34da9b27be731389c9d9287f49c988f1e
	Sep 11 11:22:07 ingress-addon-legacy-452365 crio[961]: time="2023-09-11 11:22:07.076709391Z" level=info msg="Checking image status: cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" id=7fabbdad-c284-4caf-b7c3-594e36e7311d name=/runtime.v1alpha2.ImageService/ImageStatus
	Sep 11 11:22:22 ingress-addon-legacy-452365 crio[961]: time="2023-09-11 11:22:22.077721794Z" level=info msg="Stopping pod sandbox: e11ed0248b69da7690540c98e4dd6beb0332a50ca79cc1f96922c342f7ef9c2c" id=f72e93f0-a3c5-4405-b872-4bf539aae6ea name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Sep 11 11:22:22 ingress-addon-legacy-452365 crio[961]: time="2023-09-11 11:22:22.084721852Z" level=info msg="Stopped pod sandbox: e11ed0248b69da7690540c98e4dd6beb0332a50ca79cc1f96922c342f7ef9c2c" id=f72e93f0-a3c5-4405-b872-4bf539aae6ea name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Sep 11 11:22:22 ingress-addon-legacy-452365 crio[961]: time="2023-09-11 11:22:22.087493325Z" level=info msg="Stopping pod sandbox: e11ed0248b69da7690540c98e4dd6beb0332a50ca79cc1f96922c342f7ef9c2c" id=80ffa454-5303-4fb0-88eb-1d7c8ba2a5e5 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Sep 11 11:22:22 ingress-addon-legacy-452365 crio[961]: time="2023-09-11 11:22:22.087544640Z" level=info msg="Stopped pod sandbox (already stopped): e11ed0248b69da7690540c98e4dd6beb0332a50ca79cc1f96922c342f7ef9c2c" id=80ffa454-5303-4fb0-88eb-1d7c8ba2a5e5 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Sep 11 11:22:22 ingress-addon-legacy-452365 crio[961]: time="2023-09-11 11:22:22.836808041Z" level=info msg="Stopping container: 588dfe03f7e982748812fe4bb924cbaf8e0cbcd5b6792a15b0990cac65281a8e (timeout: 2s)" id=4290ef40-37d2-4844-93c8-b7d9640dbf1b name=/runtime.v1alpha2.RuntimeService/StopContainer
	Sep 11 11:22:22 ingress-addon-legacy-452365 crio[961]: time="2023-09-11 11:22:22.841732017Z" level=info msg="Stopping container: 588dfe03f7e982748812fe4bb924cbaf8e0cbcd5b6792a15b0990cac65281a8e (timeout: 2s)" id=e6bc1d7e-e850-4328-b9d0-bbc8857f25fc name=/runtime.v1alpha2.RuntimeService/StopContainer
	Sep 11 11:22:24 ingress-addon-legacy-452365 crio[961]: time="2023-09-11 11:22:24.076354771Z" level=info msg="Stopping pod sandbox: e11ed0248b69da7690540c98e4dd6beb0332a50ca79cc1f96922c342f7ef9c2c" id=79fe90d5-5eeb-4b76-b723-995b7244bc29 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Sep 11 11:22:24 ingress-addon-legacy-452365 crio[961]: time="2023-09-11 11:22:24.076402075Z" level=info msg="Stopped pod sandbox (already stopped): e11ed0248b69da7690540c98e4dd6beb0332a50ca79cc1f96922c342f7ef9c2c" id=79fe90d5-5eeb-4b76-b723-995b7244bc29 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Sep 11 11:22:24 ingress-addon-legacy-452365 crio[961]: time="2023-09-11 11:22:24.850165043Z" level=warning msg="Stopping container 588dfe03f7e982748812fe4bb924cbaf8e0cbcd5b6792a15b0990cac65281a8e with stop signal timed out: timeout reached after 2 seconds waiting for container process to exit" id=4290ef40-37d2-4844-93c8-b7d9640dbf1b name=/runtime.v1alpha2.RuntimeService/StopContainer
	Sep 11 11:22:24 ingress-addon-legacy-452365 conmon[3425]: conmon 588dfe03f7e982748812 <ninfo>: container 3437 exited with status 137
	Sep 11 11:22:25 ingress-addon-legacy-452365 crio[961]: time="2023-09-11 11:22:25.013051169Z" level=info msg="Stopped container 588dfe03f7e982748812fe4bb924cbaf8e0cbcd5b6792a15b0990cac65281a8e: ingress-nginx/ingress-nginx-controller-7fcf777cb7-5vs5z/controller" id=e6bc1d7e-e850-4328-b9d0-bbc8857f25fc name=/runtime.v1alpha2.RuntimeService/StopContainer
	Sep 11 11:22:25 ingress-addon-legacy-452365 crio[961]: time="2023-09-11 11:22:25.013076214Z" level=info msg="Stopped container 588dfe03f7e982748812fe4bb924cbaf8e0cbcd5b6792a15b0990cac65281a8e: ingress-nginx/ingress-nginx-controller-7fcf777cb7-5vs5z/controller" id=4290ef40-37d2-4844-93c8-b7d9640dbf1b name=/runtime.v1alpha2.RuntimeService/StopContainer
	Sep 11 11:22:25 ingress-addon-legacy-452365 crio[961]: time="2023-09-11 11:22:25.013717725Z" level=info msg="Stopping pod sandbox: 70807047f28bdc5848c5ec22621e66af8908b3e0893f4c6307d95b34a574b36a" id=f86d248c-0782-483e-a50b-b95cb2250304 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Sep 11 11:22:25 ingress-addon-legacy-452365 crio[961]: time="2023-09-11 11:22:25.013738608Z" level=info msg="Stopping pod sandbox: 70807047f28bdc5848c5ec22621e66af8908b3e0893f4c6307d95b34a574b36a" id=8cdc0995-bd8a-4b0b-8de2-b442c2dc06b3 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Sep 11 11:22:25 ingress-addon-legacy-452365 crio[961]: time="2023-09-11 11:22:25.016774880Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HP-YWQVBLRBIYK6MZOE - [0:0]\n:KUBE-HOSTPORTS - [0:0]\n:KUBE-HP-HL6NDAC6EUNEM7KM - [0:0]\n-X KUBE-HP-HL6NDAC6EUNEM7KM\n-X KUBE-HP-YWQVBLRBIYK6MZOE\nCOMMIT\n"
	Sep 11 11:22:25 ingress-addon-legacy-452365 crio[961]: time="2023-09-11 11:22:25.018129553Z" level=info msg="Closing host port tcp:80"
	Sep 11 11:22:25 ingress-addon-legacy-452365 crio[961]: time="2023-09-11 11:22:25.018170280Z" level=info msg="Closing host port tcp:443"
	Sep 11 11:22:25 ingress-addon-legacy-452365 crio[961]: time="2023-09-11 11:22:25.019146155Z" level=info msg="Host port tcp:80 does not have an open socket"
	Sep 11 11:22:25 ingress-addon-legacy-452365 crio[961]: time="2023-09-11 11:22:25.019168878Z" level=info msg="Host port tcp:443 does not have an open socket"
	Sep 11 11:22:25 ingress-addon-legacy-452365 crio[961]: time="2023-09-11 11:22:25.019293310Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-7fcf777cb7-5vs5z Namespace:ingress-nginx ID:70807047f28bdc5848c5ec22621e66af8908b3e0893f4c6307d95b34a574b36a UID:7c313d77-7956-4afc-a4ed-5d2e3df0815e NetNS:/var/run/netns/1e841e9f-ffe0-46b3-898e-f8bf26da4ccf Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 11 11:22:25 ingress-addon-legacy-452365 crio[961]: time="2023-09-11 11:22:25.019411625Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-7fcf777cb7-5vs5z from CNI network \"kindnet\" (type=ptp)"
	Sep 11 11:22:25 ingress-addon-legacy-452365 crio[961]: time="2023-09-11 11:22:25.051785242Z" level=info msg="Stopped pod sandbox: 70807047f28bdc5848c5ec22621e66af8908b3e0893f4c6307d95b34a574b36a" id=f86d248c-0782-483e-a50b-b95cb2250304 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Sep 11 11:22:25 ingress-addon-legacy-452365 crio[961]: time="2023-09-11 11:22:25.051932617Z" level=info msg="Stopped pod sandbox (already stopped): 70807047f28bdc5848c5ec22621e66af8908b3e0893f4c6307d95b34a574b36a" id=8cdc0995-bd8a-4b0b-8de2-b442c2dc06b3 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	579dd7d2ea6ec       gcr.io/google-samples/hello-app@sha256:9478d168fb78b0764dd7b3c147864c4da650ee456a1f21fc4d3fe2fbb20fe1fb            23 seconds ago      Running             hello-world-app           0                   10725859ce779       hello-world-app-5f5d8b66bb-mjff2
	4367d2455155c       docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70                    2 minutes ago       Running             nginx                     0                   c866617ac4e77       nginx
	588dfe03f7e98       registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324   3 minutes ago       Exited              controller                0                   70807047f28bd       ingress-nginx-controller-7fcf777cb7-5vs5z
	e21fdf620ba10       docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6     3 minutes ago       Exited              patch                     0                   41cf2b4ebb915       ingress-nginx-admission-patch-djc4z
	67bb51d042d2d       docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6     3 minutes ago       Exited              create                    0                   3f35fa89e8846       ingress-nginx-admission-create-pln8q
	9aa773cf6d863       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                   3 minutes ago       Running             storage-provisioner       0                   15b3030a997a8       storage-provisioner
	a596de68736ca       67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5                                                   3 minutes ago       Running             coredns                   0                   c881823d85f9a       coredns-66bff467f8-ss9lq
	0fa20764bbb05       docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974                 3 minutes ago       Running             kindnet-cni               0                   e89b445389283       kindnet-sfnxz
	dec9ce3bcb44d       27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba                                                   3 minutes ago       Running             kube-proxy                0                   c68f9d0dd38bb       kube-proxy-fkp8h
	6216ec7f06b97       a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346                                                   3 minutes ago       Running             kube-scheduler            0                   96eb9af3d89b4       kube-scheduler-ingress-addon-legacy-452365
	619d314e47fa8       e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290                                                   3 minutes ago       Running             kube-controller-manager   0                   45a2406691184       kube-controller-manager-ingress-addon-legacy-452365
	15b9727c7c72b       7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1                                                   3 minutes ago       Running             kube-apiserver            0                   efd64e024cfa6       kube-apiserver-ingress-addon-legacy-452365
	063902960b3fb       303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f                                                   3 minutes ago       Running             etcd                      0                   946ecf84ba3ac       etcd-ingress-addon-legacy-452365
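	Note: this table is crictl-style output. The controller, patch, and create containers are Exited because the ingress addon was disabled at the end of the test, while the control-plane containers are still Running. A hedged way to reproduce it on the node, assuming crictl is on the node's PATH:
	
	    minikube ssh -p ingress-addon-legacy-452365 -- sudo crictl ps -a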
	
	* 
	* ==> coredns [a596de68736caa9628c44cafd0441557ea7491f655053231c5ff62cb73a71d31] <==
	* [INFO] 10.244.0.5:47106 - 23374 "AAAA IN hello-world-app.default.svc.cluster.local.c.k8s-minikube.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.005281601s
	[INFO] 10.244.0.5:48397 - 43716 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.00475978s
	[INFO] 10.244.0.5:54579 - 15649 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004816485s
	[INFO] 10.244.0.5:47880 - 15079 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004806508s
	[INFO] 10.244.0.5:33511 - 2217 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005121807s
	[INFO] 10.244.0.5:38131 - 26992 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005015862s
	[INFO] 10.244.0.5:44938 - 38735 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005011679s
	[INFO] 10.244.0.5:47106 - 16499 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004879435s
	[INFO] 10.244.0.5:55917 - 38121 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004810583s
	[INFO] 10.244.0.5:48397 - 19325 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004973464s
	[INFO] 10.244.0.5:38131 - 17959 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004683287s
	[INFO] 10.244.0.5:54579 - 30576 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004979961s
	[INFO] 10.244.0.5:33511 - 14516 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004855715s
	[INFO] 10.244.0.5:48397 - 65028 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000059182s
	[INFO] 10.244.0.5:47880 - 16797 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.00504625s
	[INFO] 10.244.0.5:55917 - 31684 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004636703s
	[INFO] 10.244.0.5:47106 - 17018 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004744082s
	[INFO] 10.244.0.5:38131 - 2945 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000052558s
	[INFO] 10.244.0.5:54579 - 2302 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000056921s
	[INFO] 10.244.0.5:44938 - 22489 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005004694s
	[INFO] 10.244.0.5:33511 - 41372 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000139992s
	[INFO] 10.244.0.5:47880 - 4396 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000123622s
	[INFO] 10.244.0.5:47106 - 28068 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000098795s
	[INFO] 10.244.0.5:44938 - 1566 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000084447s
	[INFO] 10.244.0.5:55917 - 18798 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000055545s
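	Note: the NXDOMAIN/NOERROR pairs above are normal search-path expansion, not lookup failures. With the default ndots:5 resolver settings, hello-world-app.default.svc.cluster.local (four dots) is first tried with the node's GCE search suffixes (.c.k8s-minikube.internal, .google.internal), which return NXDOMAIN, before the in-cluster name answers NOERROR. A trailing dot marks the name fully qualified and skips the suffix walk; a minimal sketch from a throwaway pod (image tag illustrative; kubectl context name assumed to match the minikube profile):
	
	    kubectl --context ingress-addon-legacy-452365 run dnscheck --rm -it --restart=Never \
	      --image=busybox:1.36 -- nslookup hello-world-app.default.svc.cluster.local.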
	
	* 
	* ==> describe nodes <==
	* Name:               ingress-addon-legacy-452365
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ingress-addon-legacy-452365
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=58460de6978298fe1c37b30354468f3a287d03e9
	                    minikube.k8s.io/name=ingress-addon-legacy-452365
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_11T11_18_46_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 11 Sep 2023 11:18:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-452365
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 11 Sep 2023 11:22:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 11 Sep 2023 11:22:16 +0000   Mon, 11 Sep 2023 11:18:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 11 Sep 2023 11:22:16 +0000   Mon, 11 Sep 2023 11:18:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 11 Sep 2023 11:22:16 +0000   Mon, 11 Sep 2023 11:18:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 11 Sep 2023 11:22:16 +0000   Mon, 11 Sep 2023 11:19:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ingress-addon-legacy-452365
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859436Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859436Ki
	  pods:               110
	System Info:
	  Machine ID:                 83fa4bd2fba74be1ae8b2d3ce85c437b
	  System UUID:                8d691f22-bcd3-42ac-9865-cee9a977a685
	  Boot ID:                    0e6f3313-afe9-4b8d-8d49-46470123e935
	  Kernel Version:             5.15.0-1041-gcp
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-mjff2                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m48s
	  kube-system                 coredns-66bff467f8-ss9lq                               100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     3m29s
	  kube-system                 etcd-ingress-addon-legacy-452365                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m44s
	  kube-system                 kindnet-sfnxz                                          100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      3m29s
	  kube-system                 kube-apiserver-ingress-addon-legacy-452365             250m (3%)     0 (0%)      0 (0%)           0 (0%)         3m44s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-452365    200m (2%)     0 (0%)      0 (0%)           0 (0%)         3m44s
	  kube-system                 kube-proxy-fkp8h                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m29s
	  kube-system                 kube-scheduler-ingress-addon-legacy-452365             100m (1%)     0 (0%)      0 (0%)           0 (0%)         3m44s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             120Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From        Message
	  ----    ------                   ----   ----        -------
	  Normal  Starting                 3m44s  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m44s  kubelet     Node ingress-addon-legacy-452365 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m44s  kubelet     Node ingress-addon-legacy-452365 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m44s  kubelet     Node ingress-addon-legacy-452365 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m28s  kube-proxy  Starting kube-proxy.
	  Normal  NodeReady                3m24s  kubelet     Node ingress-addon-legacy-452365 status is now: NodeReady
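	Note: the node description above (presumably kubectl describe node output) shows a healthy single node: Ready since 11:19:06, no taints, and all ten pods scheduled, so the ingress failure is not a node-health issue. To regenerate it (context name assumed to match the minikube profile):
	
	    kubectl --context ingress-addon-legacy-452365 describe node ingress-addon-legacy-452365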
	
	* 
	* ==> dmesg <==
	* [  +0.004958] FS-Cache: N-cookie c=0000000f [p=00000003 fl=2 nc=0 na=1]
	[  +0.006591] FS-Cache: N-cookie d=0000000025153437{9p.inode} n=0000000009a7faf7
	[  +0.007362] FS-Cache: N-key=[8] '0390130200000000'
	[  +0.279006] FS-Cache: Duplicate cookie detected
	[  +0.004692] FS-Cache: O-cookie c=00000009 [p=00000003 fl=226 nc=0 na=1]
	[  +0.006749] FS-Cache: O-cookie d=0000000025153437{9p.inode} n=00000000114ccba9
	[  +0.007360] FS-Cache: O-key=[8] '0690130200000000'
	[  +0.004933] FS-Cache: N-cookie c=00000010 [p=00000003 fl=2 nc=0 na=1]
	[  +0.006586] FS-Cache: N-cookie d=0000000025153437{9p.inode} n=000000003d6afd37
	[  +0.007368] FS-Cache: N-key=[8] '0690130200000000'
	[Sep11 11:18] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Sep11 11:19] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: da ce 4b d6 5e 85 42 c2 f8 23 8b 34 08 00
	[  +1.028025] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000031] ll header: 00000000: da ce 4b d6 5e 85 42 c2 f8 23 8b 34 08 00
	[  +2.015874] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: da ce 4b d6 5e 85 42 c2 f8 23 8b 34 08 00
	[Sep11 11:20] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: da ce 4b d6 5e 85 42 c2 f8 23 8b 34 08 00
	[  +8.187363] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: da ce 4b d6 5e 85 42 c2 f8 23 8b 34 08 00
	[ +16.126692] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000030] ll header: 00000000: da ce 4b d6 5e 85 42 c2 f8 23 8b 34 08 00
	[ +33.789234] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: da ce 4b d6 5e 85 42 c2 f8 23 8b 34 08 00
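	Note: the repeated "martian source 10.244.0.5 from 127.0.0.1" warnings match the pattern produced when route_localnet is enabled (as kube-proxy commonly does, so localhost traffic can be DNAT-ed to pod IPs) while martian logging is also on. A hedged way to confirm both sysctls on the node:
	
	    minikube ssh -p ingress-addon-legacy-452365 -- \
	      sysctl net.ipv4.conf.all.route_localnet net.ipv4.conf.all.log_martians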
	
	* 
	* ==> etcd [063902960b3fb43b51c54dfd972a5251e112b40bd45e3d5bb8c6033d93748d92] <==
	* raft2023/09/11 11:18:38 INFO: aec36adc501070cc became follower at term 0
	raft2023/09/11 11:18:38 INFO: newRaft aec36adc501070cc [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	raft2023/09/11 11:18:38 INFO: aec36adc501070cc became follower at term 1
	raft2023/09/11 11:18:38 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-09-11 11:18:38.888388 W | auth: simple token is not cryptographically signed
	2023-09-11 11:18:38.892245 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2023-09-11 11:18:38.894258 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-09-11 11:18:38.894438 I | embed: listening for peers on 192.168.49.2:2380
	2023-09-11 11:18:38.894471 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-09-11 11:18:38.894576 I | etcdserver: aec36adc501070cc as single-node; fast-forwarding 9 ticks (election ticks 10)
	raft2023/09/11 11:18:38 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-09-11 11:18:38.894830 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be
	raft2023/09/11 11:18:38 INFO: aec36adc501070cc is starting a new election at term 1
	raft2023/09/11 11:18:38 INFO: aec36adc501070cc became candidate at term 2
	raft2023/09/11 11:18:38 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2
	raft2023/09/11 11:18:38 INFO: aec36adc501070cc became leader at term 2
	raft2023/09/11 11:18:38 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2
	2023-09-11 11:18:38.981546 I | etcdserver: published {Name:ingress-addon-legacy-452365 ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
	2023-09-11 11:18:38.981633 I | embed: ready to serve client requests
	2023-09-11 11:18:38.981722 I | etcdserver: setting up the initial cluster version to 3.4
	2023-09-11 11:18:38.981919 I | embed: ready to serve client requests
	2023-09-11 11:18:38.983646 N | etcdserver/membership: set the initial cluster version to 3.4
	2023-09-11 11:18:38.983791 I | etcdserver/api: enabled capabilities for version 3.4
	2023-09-11 11:18:38.984487 I | embed: serving client requests on 127.0.0.1:2379
	2023-09-11 11:18:38.996513 I | embed: serving client requests on 192.168.49.2:2379
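	Note: the etcd log shows a clean single-member bootstrap: aec36adc501070cc elects itself leader at term 2 and serves clients on 127.0.0.1:2379 and 192.168.49.2:2379. A hedged health check using etcdctl inside the etcd static pod (cert paths copied from the ClientTLS line above; etcd 3.4 defaults to the v3 API):
	
	    kubectl --context ingress-addon-legacy-452365 -n kube-system exec etcd-ingress-addon-legacy-452365 -- \
	      etcdctl --endpoints=https://127.0.0.1:2379 \
	        --cacert=/var/lib/minikube/certs/etcd/ca.crt \
	        --cert=/var/lib/minikube/certs/etcd/server.crt \
	        --key=/var/lib/minikube/certs/etcd/server.key \
	        endpoint health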
	
	* 
	* ==> kernel <==
	*  11:22:30 up  1:04,  0 users,  load average: 0.10, 0.80, 1.40
	Linux ingress-addon-legacy-452365 5.15.0-1041-gcp #49~20.04.1-Ubuntu SMP Tue Aug 29 06:49:34 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [0fa20764bbb05be0a9f66e7648f3ac0e2b03cc3a503715cf75a88bd55197b23a] <==
	* I0911 11:20:26.107764       1 main.go:227] handling current node
	I0911 11:20:36.111251       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0911 11:20:36.111274       1 main.go:227] handling current node
	I0911 11:20:46.121151       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0911 11:20:46.121178       1 main.go:227] handling current node
	I0911 11:20:56.133157       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0911 11:20:56.133188       1 main.go:227] handling current node
	I0911 11:21:06.136624       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0911 11:21:06.136647       1 main.go:227] handling current node
	I0911 11:21:16.144957       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0911 11:21:16.144982       1 main.go:227] handling current node
	I0911 11:21:26.153127       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0911 11:21:26.153160       1 main.go:227] handling current node
	I0911 11:21:36.156750       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0911 11:21:36.156774       1 main.go:227] handling current node
	I0911 11:21:46.169028       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0911 11:21:46.169057       1 main.go:227] handling current node
	I0911 11:21:56.172558       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0911 11:21:56.172585       1 main.go:227] handling current node
	I0911 11:22:06.178352       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0911 11:22:06.178382       1 main.go:227] handling current node
	I0911 11:22:16.190000       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0911 11:22:16.190025       1 main.go:227] handling current node
	I0911 11:22:26.193673       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0911 11:22:26.193712       1 main.go:227] handling current node
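	Note: kindnet here is only heartbeating; every ~10s it re-handles the single node 192.168.49.2. To follow the same stream live (the app=kindnet label is taken from the DaemonSet manifest quoted in the kube-controller-manager section below):
	
	    kubectl --context ingress-addon-legacy-452365 -n kube-system logs -l app=kindnet --tail=5 -f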
	
	* 
	* ==> kube-apiserver [15b9727c7c72b954c630c0374cbf7eea0aadae572870edb0eb5f3d1f4a287c3b] <==
	* E0911 11:18:42.858855       1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.49.2, ResourceVersion: 0, AdditionalErrorMsg: 
	I0911 11:18:42.958351       1 cache.go:39] Caches are synced for autoregister controller
	I0911 11:18:42.958354       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0911 11:18:42.958371       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I0911 11:18:42.958385       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I0911 11:18:42.958451       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0911 11:18:43.850369       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0911 11:18:43.850401       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0911 11:18:43.855210       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I0911 11:18:43.858191       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I0911 11:18:43.858209       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I0911 11:18:44.219781       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0911 11:18:44.247310       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0911 11:18:44.383160       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0911 11:18:44.384014       1 controller.go:609] quota admission added evaluator for: endpoints
	I0911 11:18:44.386814       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0911 11:18:45.139504       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I0911 11:18:45.726523       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I0911 11:18:45.958466       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I0911 11:18:46.058919       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I0911 11:19:01.171634       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I0911 11:19:01.263901       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I0911 11:19:19.472892       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I0911 11:19:41.997107       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	E0911 11:22:22.851174       1 authentication.go:53] Unable to authenticate the request due to an error: [invalid bearer token, Token has been invalidated]
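	Note: the single error at 11:22:22 ("Token has been invalidated") lines up with the ingress-nginx namespace teardown recorded in the kubelet section below: the controller's ServiceAccount token was deleted while a request from it was still in flight, which is expected during addon removal rather than an authentication problem. During that window the namespace would show as Terminating:
	
	    kubectl --context ingress-addon-legacy-452365 get namespace ingress-nginx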
	
	* 
	* ==> kube-controller-manager [619d314e47fa8c24afb16d7fcbe6d029791f95c7a50092105b93291df3029202] <==
	* E0911 11:19:01.375172       1 daemon_controller.go:321] kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kindnet", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kindnet", UID:"bfad94ec-52dd-471e-bd5c-5b897250b11b", ResourceVersion:"216", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63830027926, loc:(*time.Location)(0x6d002e0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"},\"name\":\"kindnet\",\"namespace\":\"kube-syste
m\"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"kindnet\"}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"}},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"HOST_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.hostIP\"}}},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}},{\"name\":\"POD_SUBNET\",\"value\":\"10.244.0.0/16\"}],\"image\":\"docker.io/kindest/kindnetd:v20230511-dc714da8\",\"name\":\"kindnet-cni\",\"resources\":{\"limits\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"},\"requests\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"}},\"securityContext\":{\"capabilities\":{\"add\":[\"NET_RAW\",\"NET_ADMIN\"]},\"privileged\":false},\"volumeMounts\":[{\"mountPath\":\"/etc/cni/net.d\",\"name\":\"cni-cfg\"},{\"mountPath\":\"/run/xtables.lock\",\"name\":\"xtables-lock\",\"readOnly\":false},{\"mountPath\":\"/lib/modules\",\"name\":\"lib-modules\",\"readOnly\":true}]}],\"hostNetwork\":true,\"serviceAccountName\":\"kindnet\
",\"tolerations\":[{\"effect\":\"NoSchedule\",\"operator\":\"Exists\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/cni/net.d\",\"type\":\"DirectoryOrCreate\"},\"name\":\"cni-cfg\"},{\"hostPath\":{\"path\":\"/run/xtables.lock\",\"type\":\"FileOrCreate\"},\"name\":\"xtables-lock\"},{\"hostPath\":{\"path\":\"/lib/modules\"},\"name\":\"lib-modules\"}]}}}}\n"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubectl", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc001a804c0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001a804e0)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc001a80500), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:
(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"cni-cfg", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001a80520), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardA
PI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001a80540), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsV
olumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001a80560), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentD
isk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), S
caleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kindnet-cni", Image:"docker.io/kindest/kindnetd:v20230511-dc714da8", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"HOST_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc001a80580)}, v1.EnvVar{Name:"POD_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc001a805c0)}, v1.EnvVar{Name:"POD_SUBNET", Value:"10.244.0.0/16", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}, Requests:v1.
ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"cni-cfg", ReadOnly:false, MountPath:"/etc/cni/net.d", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-lo
g", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc000a80500), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0014762d8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"kindnet", DeprecatedServiceAccount:"kindnet", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc00093d2d0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1
.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc00000e6e0)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc001476320)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kindnet": the object has been modified; please apply your changes to the latest version and try again
	I0911 11:19:01.400660       1 shared_informer.go:230] Caches are synced for endpoint 
	I0911 11:19:01.629918       1 shared_informer.go:230] Caches are synced for namespace 
	I0911 11:19:01.640631       1 shared_informer.go:230] Caches are synced for service account 
	I0911 11:19:01.740726       1 shared_informer.go:230] Caches are synced for resource quota 
	I0911 11:19:01.745133       1 shared_informer.go:230] Caches are synced for resource quota 
	I0911 11:19:01.758218       1 shared_informer.go:230] Caches are synced for attach detach 
	I0911 11:19:01.758231       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0911 11:19:01.758364       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0911 11:19:01.789983       1 shared_informer.go:230] Caches are synced for expand 
	I0911 11:19:01.789984       1 shared_informer.go:230] Caches are synced for PV protection 
	I0911 11:19:01.805385       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"d44fe2fc-ec2e-4969-8c32-d9e1c50c7daa", APIVersion:"apps/v1", ResourceVersion:"366", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set coredns-66bff467f8 to 1
	I0911 11:19:01.858237       1 shared_informer.go:230] Caches are synced for persistent volume 
	I0911 11:19:01.858468       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0911 11:19:01.878981       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"bc690908-0278-4ae0-aa0e-423d694d8acd", APIVersion:"apps/v1", ResourceVersion:"367", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: coredns-66bff467f8-l7b6f
	I0911 11:19:06.259950       1 node_lifecycle_controller.go:1226] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I0911 11:19:19.464495       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"b1fd2516-5ec4-41de-8cb1-bab373109484", APIVersion:"apps/v1", ResourceVersion:"474", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I0911 11:19:19.471158       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"d0ede932-9811-4c40-89d9-7c7d4ff2a370", APIVersion:"apps/v1", ResourceVersion:"475", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-5vs5z
	I0911 11:19:19.485684       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"282901a9-768e-4e1e-a046-6f098551df6a", APIVersion:"batch/v1", ResourceVersion:"481", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-pln8q
	I0911 11:19:19.559951       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"ef700f59-b72d-429d-b417-3255294dbf0c", APIVersion:"batch/v1", ResourceVersion:"488", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-djc4z
	I0911 11:19:24.209575       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"ef700f59-b72d-429d-b417-3255294dbf0c", APIVersion:"batch/v1", ResourceVersion:"497", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0911 11:19:24.219242       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"282901a9-768e-4e1e-a046-6f098551df6a", APIVersion:"batch/v1", ResourceVersion:"489", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0911 11:22:04.557495       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"3052097a-899d-4677-8249-01b662ddb196", APIVersion:"apps/v1", ResourceVersion:"713", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I0911 11:22:04.564677       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"0472bf23-f720-47d2-b205-0f480da01d7a", APIVersion:"apps/v1", ResourceVersion:"714", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-mjff2
	
	* 
	* ==> kube-proxy [dec9ce3bcb44d62fe38b95751eebbea9da91d30860d69b055b99e34ca7426863] <==
	* W0911 11:19:02.073036       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I0911 11:19:02.083547       1 node.go:136] Successfully retrieved node IP: 192.168.49.2
	I0911 11:19:02.083585       1 server_others.go:186] Using iptables Proxier.
	I0911 11:19:02.084002       1 server.go:583] Version: v1.18.20
	I0911 11:19:02.084690       1 config.go:133] Starting endpoints config controller
	I0911 11:19:02.084713       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I0911 11:19:02.084730       1 config.go:315] Starting service config controller
	I0911 11:19:02.084752       1 shared_informer.go:223] Waiting for caches to sync for service config
	I0911 11:19:02.184966       1 shared_informer.go:230] Caches are synced for service config 
	I0911 11:19:02.185051       1 shared_informer.go:230] Caches are synced for endpoints config 
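	Note: kube-proxy starts with an empty proxy mode and falls back to iptables ('Unknown proxy mode "", assuming iptables proxy'). On kubeadm-style clusters such as minikube's, the mode is typically set via the KubeProxyConfiguration embedded in the kube-system/kube-proxy ConfigMap (mode: "iptables"); a hedged way to inspect it:
	
	    kubectl --context ingress-addon-legacy-452365 -n kube-system get configmap kube-proxy -o yaml | grep -n 'mode:'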
	
	* 
	* ==> kube-scheduler [6216ec7f06b979227c93e2d43af8aefbd71137f0ad39c0bc0d37d82fcd41591a] <==
	* I0911 11:18:42.963377       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0911 11:18:42.966073       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I0911 11:18:42.966219       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I0911 11:18:42.966311       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0911 11:18:42.967514       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0911 11:18:42.970132       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0911 11:18:42.970248       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0911 11:18:42.970356       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0911 11:18:42.975545       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0911 11:18:42.975844       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0911 11:18:42.975994       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0911 11:18:42.976073       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0911 11:18:42.976125       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0911 11:18:42.976235       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0911 11:18:42.976239       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0911 11:18:42.976371       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0911 11:18:42.976439       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0911 11:18:43.959778       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0911 11:18:43.959816       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0911 11:18:44.058500       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0911 11:18:44.062214       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0911 11:18:44.078172       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0911 11:18:44.112765       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0911 11:18:44.128436       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0911 11:18:47.267740       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
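	Note: the burst of "forbidden" errors at 11:18:42-44 is the usual control-plane startup race: the scheduler's informers list resources before the system:kube-scheduler RBAC bindings and the extension-apiserver-authentication ConfigMap are readable, and the section ends with the caches syncing cleanly at 11:18:47. Once the cluster settles, the same permissions can be spot-checked via impersonation:
	
	    kubectl --context ingress-addon-legacy-452365 auth can-i list nodes --as=system:kube-scheduler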
	
	* 
	* ==> kubelet <==
	* Sep 11 11:21:41 ingress-addon-legacy-452365 kubelet[1860]: E0911 11:21:41.077231    1860 pod_workers.go:191] Error syncing pod 32793939-2cec-48e3-81df-06fdb47aed8d ("kube-ingress-dns-minikube_kube-system(32793939-2cec-48e3-81df-06fdb47aed8d)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	Sep 11 11:21:56 ingress-addon-legacy-452365 kubelet[1860]: E0911 11:21:56.077220    1860 remote_image.go:87] ImageStatus "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" from image service failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Sep 11 11:21:56 ingress-addon-legacy-452365 kubelet[1860]: E0911 11:21:56.077267    1860 kuberuntime_image.go:85] ImageStatus for image {"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab"} failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Sep 11 11:21:56 ingress-addon-legacy-452365 kubelet[1860]: E0911 11:21:56.077331    1860 kuberuntime_manager.go:818] container start failed: ImageInspectError: Failed to inspect image "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab": rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Sep 11 11:21:56 ingress-addon-legacy-452365 kubelet[1860]: E0911 11:21:56.077371    1860 pod_workers.go:191] Error syncing pod 32793939-2cec-48e3-81df-06fdb47aed8d ("kube-ingress-dns-minikube_kube-system(32793939-2cec-48e3-81df-06fdb47aed8d)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	Sep 11 11:22:04 ingress-addon-legacy-452365 kubelet[1860]: I0911 11:22:04.570363    1860 topology_manager.go:235] [topologymanager] Topology Admit Handler
	Sep 11 11:22:04 ingress-addon-legacy-452365 kubelet[1860]: I0911 11:22:04.758666    1860 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-m2twz" (UniqueName: "kubernetes.io/secret/4fc17bc7-b25e-40ad-9396-651d9114390c-default-token-m2twz") pod "hello-world-app-5f5d8b66bb-mjff2" (UID: "4fc17bc7-b25e-40ad-9396-651d9114390c")
	Sep 11 11:22:04 ingress-addon-legacy-452365 kubelet[1860]: W0911 11:22:04.920128    1860 manager.go:1131] Failed to process watch event {EventType:0 Name:/docker/c67a9f521032edb57ed44d753210d912028f588a92e9e54c19f31e144832953c/crio-10725859ce779036635fe41d76b9ccd34da9b27be731389c9d9287f49c988f1e WatchSource:0}: Error finding container 10725859ce779036635fe41d76b9ccd34da9b27be731389c9d9287f49c988f1e: Status 404 returned error &{%!s(*http.body=&{0xc000b045a0 <nil> <nil> false false {0 0} false false false <nil>}) {%!s(int32=0) %!s(uint32=0)} %!s(bool=false) <nil> %!s(func(error) error=0x750800) %!s(func() error=0x750790)}
	Sep 11 11:22:07 ingress-addon-legacy-452365 kubelet[1860]: E0911 11:22:07.077056    1860 remote_image.go:87] ImageStatus "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" from image service failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Sep 11 11:22:07 ingress-addon-legacy-452365 kubelet[1860]: E0911 11:22:07.077109    1860 kuberuntime_image.go:85] ImageStatus for image {"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab"} failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Sep 11 11:22:07 ingress-addon-legacy-452365 kubelet[1860]: E0911 11:22:07.077167    1860 kuberuntime_manager.go:818] container start failed: ImageInspectError: Failed to inspect image "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab": rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Sep 11 11:22:07 ingress-addon-legacy-452365 kubelet[1860]: E0911 11:22:07.077203    1860 pod_workers.go:191] Error syncing pod 32793939-2cec-48e3-81df-06fdb47aed8d ("kube-ingress-dns-minikube_kube-system(32793939-2cec-48e3-81df-06fdb47aed8d)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	Sep 11 11:22:20 ingress-addon-legacy-452365 kubelet[1860]: I0911 11:22:20.396802    1860 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-7gqwv" (UniqueName: "kubernetes.io/secret/32793939-2cec-48e3-81df-06fdb47aed8d-minikube-ingress-dns-token-7gqwv") pod "32793939-2cec-48e3-81df-06fdb47aed8d" (UID: "32793939-2cec-48e3-81df-06fdb47aed8d")
	Sep 11 11:22:20 ingress-addon-legacy-452365 kubelet[1860]: I0911 11:22:20.399302    1860 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32793939-2cec-48e3-81df-06fdb47aed8d-minikube-ingress-dns-token-7gqwv" (OuterVolumeSpecName: "minikube-ingress-dns-token-7gqwv") pod "32793939-2cec-48e3-81df-06fdb47aed8d" (UID: "32793939-2cec-48e3-81df-06fdb47aed8d"). InnerVolumeSpecName "minikube-ingress-dns-token-7gqwv". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Sep 11 11:22:20 ingress-addon-legacy-452365 kubelet[1860]: I0911 11:22:20.497263    1860 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-7gqwv" (UniqueName: "kubernetes.io/secret/32793939-2cec-48e3-81df-06fdb47aed8d-minikube-ingress-dns-token-7gqwv") on node "ingress-addon-legacy-452365" DevicePath ""
	Sep 11 11:22:22 ingress-addon-legacy-452365 kubelet[1860]: W0911 11:22:22.530949    1860 pod_container_deletor.go:77] Container "e11ed0248b69da7690540c98e4dd6beb0332a50ca79cc1f96922c342f7ef9c2c" not found in pod's containers
	Sep 11 11:22:22 ingress-addon-legacy-452365 kubelet[1860]: E0911 11:22:22.837812    1860 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-5vs5z.1783d4542e0cf8a1", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-5vs5z", UID:"7c313d77-7956-4afc-a4ed-5d2e3df0815e", APIVersion:"v1", ResourceVersion:"479", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stoppi
ng container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-452365"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc137dbfbb1da2ca1, ext:217147050256, loc:(*time.Location)(0x701e5a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc137dbfbb1da2ca1, ext:217147050256, loc:(*time.Location)(0x701e5a0)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-5vs5z.1783d4542e0cf8a1" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Sep 11 11:22:22 ingress-addon-legacy-452365 kubelet[1860]: E0911 11:22:22.844487    1860 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-5vs5z.1783d4542e0cf8a1", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-5vs5z", UID:"7c313d77-7956-4afc-a4ed-5d2e3df0815e", APIVersion:"v1", ResourceVersion:"479", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stoppi
ng container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-452365"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc137dbfbb1da2ca1, ext:217147050256, loc:(*time.Location)(0x701e5a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc137dbfbb2275f98, ext:217152109567, loc:(*time.Location)(0x701e5a0)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-5vs5z.1783d4542e0cf8a1" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Sep 11 11:22:25 ingress-addon-legacy-452365 kubelet[1860]: W0911 11:22:25.537049    1860 pod_container_deletor.go:77] Container "70807047f28bdc5848c5ec22621e66af8908b3e0893f4c6307d95b34a574b36a" not found in pod's containers
	Sep 11 11:22:26 ingress-addon-legacy-452365 kubelet[1860]: I0911 11:22:26.969131    1860 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/7c313d77-7956-4afc-a4ed-5d2e3df0815e-webhook-cert") pod "7c313d77-7956-4afc-a4ed-5d2e3df0815e" (UID: "7c313d77-7956-4afc-a4ed-5d2e3df0815e")
	Sep 11 11:22:26 ingress-addon-legacy-452365 kubelet[1860]: I0911 11:22:26.969190    1860 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-qqbhk" (UniqueName: "kubernetes.io/secret/7c313d77-7956-4afc-a4ed-5d2e3df0815e-ingress-nginx-token-qqbhk") pod "7c313d77-7956-4afc-a4ed-5d2e3df0815e" (UID: "7c313d77-7956-4afc-a4ed-5d2e3df0815e")
	Sep 11 11:22:26 ingress-addon-legacy-452365 kubelet[1860]: I0911 11:22:26.971140    1860 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c313d77-7956-4afc-a4ed-5d2e3df0815e-ingress-nginx-token-qqbhk" (OuterVolumeSpecName: "ingress-nginx-token-qqbhk") pod "7c313d77-7956-4afc-a4ed-5d2e3df0815e" (UID: "7c313d77-7956-4afc-a4ed-5d2e3df0815e"). InnerVolumeSpecName "ingress-nginx-token-qqbhk". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Sep 11 11:22:26 ingress-addon-legacy-452365 kubelet[1860]: I0911 11:22:26.971534    1860 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c313d77-7956-4afc-a4ed-5d2e3df0815e-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "7c313d77-7956-4afc-a4ed-5d2e3df0815e" (UID: "7c313d77-7956-4afc-a4ed-5d2e3df0815e"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Sep 11 11:22:27 ingress-addon-legacy-452365 kubelet[1860]: I0911 11:22:27.069492    1860 reconciler.go:319] Volume detached for volume "ingress-nginx-token-qqbhk" (UniqueName: "kubernetes.io/secret/7c313d77-7956-4afc-a4ed-5d2e3df0815e-ingress-nginx-token-qqbhk") on node "ingress-addon-legacy-452365" DevicePath ""
	Sep 11 11:22:27 ingress-addon-legacy-452365 kubelet[1860]: I0911 11:22:27.069540    1860 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/7c313d77-7956-4afc-a4ed-5d2e3df0815e-webhook-cert") on node "ingress-addon-legacy-452365" DevicePath ""
	
	* 
	* ==> storage-provisioner [9aa773cf6d86380b8688a7014eb34fd4582283295d3b28c61913acb28861dfad] <==
	* I0911 11:19:11.213040       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0911 11:19:11.221407       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0911 11:19:11.221455       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0911 11:19:11.226536       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0911 11:19:11.226670       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a92391d2-e374-43ab-b712-1c434ce23df4", APIVersion:"v1", ResourceVersion:"426", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-452365_a942a7d2-5b50-4245-8959-30ebede58710 became leader
	I0911 11:19:11.226708       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-452365_a942a7d2-5b50-4245-8959-30ebede58710!
	I0911 11:19:11.327284       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-452365_a942a7d2-5b50-4245-8959-30ebede58710!
	

                                                
                                                
-- /stdout --
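
The storage-provisioner block above is the standard client-go leader-election sequence: attempt to acquire the kube-system/k8s.io-minikube-hostpath lock, win it, then start the provisioner controller. Below is a minimal Go sketch of that pattern, not minikube's actual code: the log shows an Endpoints-based lock with a hostname_uuid identity, whereas this sketch uses the Lease lock and a placeholder identity favored by current client-go examples.

package main

import (
	"context"
	"log"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	// Runs in-cluster, like the provisioner pod in the log above.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	lock := &resourcelock.LeaseLock{
		// Lock name and namespace taken from the leaderelection.go lines above.
		LeaseMeta:  metav1.ObjectMeta{Namespace: "kube-system", Name: "k8s.io-minikube-hostpath"},
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: "example-identity"}, // placeholder; the real one is hostname_uuid
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second,
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				// The "now starting service!" / "Starting provisioner controller" step happens here.
				log.Println("acquired lease, starting controller")
			},
			OnStoppedLeading: func() { log.Println("lost lease, stopping") },
		},
	})
}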
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ingress-addon-legacy-452365 -n ingress-addon-legacy-452365
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-452365 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (181.17s)

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (3.43s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-517978 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:560: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-517978 -- exec busybox-5bc68d56bd-l4r9c -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-517978 -- exec busybox-5bc68d56bd-l4r9c -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:571: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-517978 -- exec busybox-5bc68d56bd-l4r9c -- sh -c "ping -c 1 192.168.58.1": exit status 1 (183.637367ms)

                                                
                                                
-- stdout --
	PING 192.168.58.1 (192.168.58.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:572: Failed to ping host (192.168.58.1) from pod (busybox-5bc68d56bd-l4r9c): exit status 1
multinode_test.go:560: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-517978 -- exec busybox-5bc68d56bd-qrkdr -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-517978 -- exec busybox-5bc68d56bd-qrkdr -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:571: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-517978 -- exec busybox-5bc68d56bd-qrkdr -- sh -c "ping -c 1 192.168.58.1": exit status 1 (173.805991ms)

                                                
                                                
-- stdout --
	PING 192.168.58.1 (192.168.58.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:572: Failed to ping host (192.168.58.1) from pod (busybox-5bc68d56bd-qrkdr): exit status 1
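
Both pods fail identically, and the stderr line pinpoints the cause: busybox's ping applet opens a raw ICMP socket, which requires root or CAP_NET_RAW inside the container, so the socket cannot be opened and no packet ever reaches the network. This is a pod-privilege problem, not a routing one. For contrast, here is a hedged Go sketch (not part of the test suite) that pings the same gateway through a datagram ICMP socket, which works unprivileged whenever the caller's GID falls inside the kernel's net.ipv4.ping_group_range:

package main

import (
	"log"
	"net"
	"os"
	"time"

	"golang.org/x/net/icmp"
	"golang.org/x/net/ipv4"
)

func main() {
	// "udp4" asks for a SOCK_DGRAM ICMP socket instead of a raw socket,
	// so no CAP_NET_RAW is needed (subject to net.ipv4.ping_group_range).
	c, err := icmp.ListenPacket("udp4", "0.0.0.0")
	if err != nil {
		log.Fatalf("open datagram ICMP socket: %v", err)
	}
	defer c.Close()

	msg := icmp.Message{
		Type: ipv4.ICMPTypeEcho,
		Body: &icmp.Echo{ID: os.Getpid() & 0xffff, Seq: 1, Data: []byte("HELLO")},
	}
	wb, err := msg.Marshal(nil)
	if err != nil {
		log.Fatal(err)
	}
	// 192.168.58.1 is the docker network gateway the test targets.
	if _, err := c.WriteTo(wb, &net.UDPAddr{IP: net.ParseIP("192.168.58.1")}); err != nil {
		log.Fatalf("send echo request: %v", err)
	}

	c.SetReadDeadline(time.Now().Add(3 * time.Second))
	rb := make([]byte, 1500)
	n, peer, err := c.ReadFrom(rb)
	if err != nil {
		log.Fatalf("read echo reply: %v", err)
	}
	rm, err := icmp.ParseMessage(1, rb[:n]) // 1 = IANA protocol number for ICMPv4
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("got %v from %v", rm.Type, peer)
}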
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-517978
helpers_test.go:235: (dbg) docker inspect multinode-517978:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "0e320f19ce0ce722709872eed592d3b3d1159c017b64f129d929e2069d1e41e1",
	        "Created": "2023-09-11T11:27:32.667549592Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 228350,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-09-11T11:27:32.940602502Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a5b1b95d50f24b5df6a9115c9ada0cb74f27ed4b03c4761eb60ee23f0bdd5210",
	        "ResolvConfPath": "/var/lib/docker/containers/0e320f19ce0ce722709872eed592d3b3d1159c017b64f129d929e2069d1e41e1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0e320f19ce0ce722709872eed592d3b3d1159c017b64f129d929e2069d1e41e1/hostname",
	        "HostsPath": "/var/lib/docker/containers/0e320f19ce0ce722709872eed592d3b3d1159c017b64f129d929e2069d1e41e1/hosts",
	        "LogPath": "/var/lib/docker/containers/0e320f19ce0ce722709872eed592d3b3d1159c017b64f129d929e2069d1e41e1/0e320f19ce0ce722709872eed592d3b3d1159c017b64f129d929e2069d1e41e1-json.log",
	        "Name": "/multinode-517978",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "multinode-517978:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "multinode-517978",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/dea76705181e6d29b96ccb970e590ebb59e7341f5b48acc59f382645f4d85c07-init/diff:/var/lib/docker/overlay2/5fefd4c14d5bc4d7d67c2f6371e7160909b1f4d0d9a655e2a127286f8f0bbb5c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/dea76705181e6d29b96ccb970e590ebb59e7341f5b48acc59f382645f4d85c07/merged",
	                "UpperDir": "/var/lib/docker/overlay2/dea76705181e6d29b96ccb970e590ebb59e7341f5b48acc59f382645f4d85c07/diff",
	                "WorkDir": "/var/lib/docker/overlay2/dea76705181e6d29b96ccb970e590ebb59e7341f5b48acc59f382645f4d85c07/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "multinode-517978",
	                "Source": "/var/lib/docker/volumes/multinode-517978/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "multinode-517978",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "multinode-517978",
	                "name.minikube.sigs.k8s.io": "multinode-517978",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ec56cae9a8ce3bd1905ba26bd1583f5fe59ba7f4cc85c0c6693e31a8eb26e82d",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32967"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32966"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32963"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32965"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32964"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/ec56cae9a8ce",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "multinode-517978": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "0e320f19ce0c",
	                        "multinode-517978"
	                    ],
	                    "NetworkID": "40f62e59100ca79aa7118f681c41bfb4dcfe42e9cc2e60f0907256821b65e193",
	                    "EndpointID": "6358c5976963002b724eb22a870beed17dcf3010d0d14f512daeb9aea96cbf4d",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
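
The two fields relevant to the failed ping are buried in the JSON above: under Networks.multinode-517978, Gateway is 192.168.58.1 (the address the pods could not ping) and IPAddress is 192.168.58.2 (the node itself). Here is a small Go sketch of extracting them the same way the harness does further down in this log, through docker's -f format templates; the container name comes from this run, the rest is illustrative:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

// inspectField shells out to `docker container inspect -f`, mirroring the
// cli_runner.go invocations that appear later in this log.
func inspectField(container, format string) string {
	out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
	if err != nil {
		log.Fatalf("docker inspect %s: %v", container, err)
	}
	return strings.TrimSpace(string(out))
}

func main() {
	ip := inspectField("multinode-517978", "{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}")
	gw := inspectField("multinode-517978", "{{range .NetworkSettings.Networks}}{{.Gateway}}{{end}}")
	fmt.Printf("node IP %s, gateway (ping target) %s\n", ip, gw) // 192.168.58.2, 192.168.58.1
}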
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-517978 -n multinode-517978
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-517978 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-517978 logs -n 25: (1.355103552s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -p mount-start-2-055789                           | mount-start-2-055789 | jenkins | v1.31.2 | 11 Sep 23 11:27 UTC | 11 Sep 23 11:27 UTC |
	|         | --memory=2048 --mount                             |                      |         |         |                     |                     |
	|         | --mount-gid 0 --mount-msize                       |                      |         |         |                     |                     |
	|         | 6543 --mount-port 46465                           |                      |         |         |                     |                     |
	|         | --mount-uid 0 --no-kubernetes                     |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| ssh     | mount-start-2-055789 ssh -- ls                    | mount-start-2-055789 | jenkins | v1.31.2 | 11 Sep 23 11:27 UTC | 11 Sep 23 11:27 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-1-040317                           | mount-start-1-040317 | jenkins | v1.31.2 | 11 Sep 23 11:27 UTC | 11 Sep 23 11:27 UTC |
	|         | --alsologtostderr -v=5                            |                      |         |         |                     |                     |
	| ssh     | mount-start-2-055789 ssh -- ls                    | mount-start-2-055789 | jenkins | v1.31.2 | 11 Sep 23 11:27 UTC | 11 Sep 23 11:27 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| stop    | -p mount-start-2-055789                           | mount-start-2-055789 | jenkins | v1.31.2 | 11 Sep 23 11:27 UTC | 11 Sep 23 11:27 UTC |
	| start   | -p mount-start-2-055789                           | mount-start-2-055789 | jenkins | v1.31.2 | 11 Sep 23 11:27 UTC | 11 Sep 23 11:27 UTC |
	| ssh     | mount-start-2-055789 ssh -- ls                    | mount-start-2-055789 | jenkins | v1.31.2 | 11 Sep 23 11:27 UTC | 11 Sep 23 11:27 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-2-055789                           | mount-start-2-055789 | jenkins | v1.31.2 | 11 Sep 23 11:27 UTC | 11 Sep 23 11:27 UTC |
	| delete  | -p mount-start-1-040317                           | mount-start-1-040317 | jenkins | v1.31.2 | 11 Sep 23 11:27 UTC | 11 Sep 23 11:27 UTC |
	| start   | -p multinode-517978                               | multinode-517978     | jenkins | v1.31.2 | 11 Sep 23 11:27 UTC | 11 Sep 23 11:29 UTC |
	|         | --wait=true --memory=2200                         |                      |         |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr                                 |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| kubectl | -p multinode-517978 -- apply -f                   | multinode-517978     | jenkins | v1.31.2 | 11 Sep 23 11:29 UTC | 11 Sep 23 11:29 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |         |         |                     |                     |
	| kubectl | -p multinode-517978 -- rollout                    | multinode-517978     | jenkins | v1.31.2 | 11 Sep 23 11:29 UTC | 11 Sep 23 11:29 UTC |
	|         | status deployment/busybox                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-517978 -- get pods -o                | multinode-517978     | jenkins | v1.31.2 | 11 Sep 23 11:29 UTC | 11 Sep 23 11:29 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-517978 -- get pods -o                | multinode-517978     | jenkins | v1.31.2 | 11 Sep 23 11:29 UTC | 11 Sep 23 11:29 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-517978 -- exec                       | multinode-517978     | jenkins | v1.31.2 | 11 Sep 23 11:29 UTC | 11 Sep 23 11:29 UTC |
	|         | busybox-5bc68d56bd-l4r9c --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-517978 -- exec                       | multinode-517978     | jenkins | v1.31.2 | 11 Sep 23 11:29 UTC | 11 Sep 23 11:29 UTC |
	|         | busybox-5bc68d56bd-qrkdr --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-517978 -- exec                       | multinode-517978     | jenkins | v1.31.2 | 11 Sep 23 11:29 UTC | 11 Sep 23 11:29 UTC |
	|         | busybox-5bc68d56bd-l4r9c --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-517978 -- exec                       | multinode-517978     | jenkins | v1.31.2 | 11 Sep 23 11:29 UTC | 11 Sep 23 11:29 UTC |
	|         | busybox-5bc68d56bd-qrkdr --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-517978 -- exec                       | multinode-517978     | jenkins | v1.31.2 | 11 Sep 23 11:29 UTC | 11 Sep 23 11:29 UTC |
	|         | busybox-5bc68d56bd-l4r9c -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-517978 -- exec                       | multinode-517978     | jenkins | v1.31.2 | 11 Sep 23 11:29 UTC | 11 Sep 23 11:29 UTC |
	|         | busybox-5bc68d56bd-qrkdr -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-517978 -- get pods -o                | multinode-517978     | jenkins | v1.31.2 | 11 Sep 23 11:29 UTC | 11 Sep 23 11:29 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-517978 -- exec                       | multinode-517978     | jenkins | v1.31.2 | 11 Sep 23 11:29 UTC | 11 Sep 23 11:29 UTC |
	|         | busybox-5bc68d56bd-l4r9c                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-517978 -- exec                       | multinode-517978     | jenkins | v1.31.2 | 11 Sep 23 11:29 UTC |                     |
	|         | busybox-5bc68d56bd-l4r9c -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.58.1                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-517978 -- exec                       | multinode-517978     | jenkins | v1.31.2 | 11 Sep 23 11:29 UTC | 11 Sep 23 11:29 UTC |
	|         | busybox-5bc68d56bd-qrkdr                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-517978 -- exec                       | multinode-517978     | jenkins | v1.31.2 | 11 Sep 23 11:29 UTC |                     |
	|         | busybox-5bc68d56bd-qrkdr -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.58.1                         |                      |         |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/11 11:27:26
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.20.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0911 11:27:26.740606  227744 out.go:296] Setting OutFile to fd 1 ...
	I0911 11:27:26.740725  227744 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 11:27:26.740734  227744 out.go:309] Setting ErrFile to fd 2...
	I0911 11:27:26.740739  227744 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 11:27:26.740946  227744 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17223-136166/.minikube/bin
	I0911 11:27:26.741510  227744 out.go:303] Setting JSON to false
	I0911 11:27:26.742682  227744 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":4195,"bootTime":1694427452,"procs":518,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0911 11:27:26.742736  227744 start.go:138] virtualization: kvm guest
	I0911 11:27:26.745051  227744 out.go:177] * [multinode-517978] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0911 11:27:26.746893  227744 out.go:177]   - MINIKUBE_LOCATION=17223
	I0911 11:27:26.746953  227744 notify.go:220] Checking for updates...
	I0911 11:27:26.748240  227744 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0911 11:27:26.749914  227744 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17223-136166/kubeconfig
	I0911 11:27:26.751474  227744 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17223-136166/.minikube
	I0911 11:27:26.753086  227744 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0911 11:27:26.754555  227744 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0911 11:27:26.755979  227744 driver.go:373] Setting default libvirt URI to qemu:///system
	I0911 11:27:26.777822  227744 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I0911 11:27:26.777936  227744 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0911 11:27:26.838324  227744 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:36 SystemTime:2023-09-11 11:27:26.829082073 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1041-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0911 11:27:26.838419  227744 docker.go:294] overlay module found
	I0911 11:27:26.841087  227744 out.go:177] * Using the docker driver based on user configuration
	I0911 11:27:26.842325  227744 start.go:298] selected driver: docker
	I0911 11:27:26.842337  227744 start.go:902] validating driver "docker" against <nil>
	I0911 11:27:26.842347  227744 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0911 11:27:26.843018  227744 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0911 11:27:26.901683  227744 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:36 SystemTime:2023-09-11 11:27:26.893767228 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1041-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0911 11:27:26.901828  227744 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0911 11:27:26.902018  227744 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0911 11:27:26.903751  227744 out.go:177] * Using Docker driver with root privileges
	I0911 11:27:26.905160  227744 cni.go:84] Creating CNI manager for ""
	I0911 11:27:26.905172  227744 cni.go:136] 0 nodes found, recommending kindnet
	I0911 11:27:26.905179  227744 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I0911 11:27:26.905188  227744 start_flags.go:321] config:
	{Name:multinode-517978 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:multinode-517978 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 11:27:26.906738  227744 out.go:177] * Starting control plane node multinode-517978 in cluster multinode-517978
	I0911 11:27:26.907872  227744 cache.go:122] Beginning downloading kic base image for docker with crio
	I0911 11:27:26.909189  227744 out.go:177] * Pulling base image ...
	I0911 11:27:26.910375  227744 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0911 11:27:26.910393  227744 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b in local docker daemon
	I0911 11:27:26.910415  227744 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17223-136166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4
	I0911 11:27:26.910429  227744 cache.go:57] Caching tarball of preloaded images
	I0911 11:27:26.910513  227744 preload.go:174] Found /home/jenkins/minikube-integration/17223-136166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0911 11:27:26.910527  227744 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on crio
	I0911 11:27:26.910845  227744 profile.go:148] Saving config to /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/multinode-517978/config.json ...
	I0911 11:27:26.910870  227744 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/multinode-517978/config.json: {Name:mkfeb3e65b998cd6b24f919b326e1183f4dcffaf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 11:27:26.926641  227744 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b in local docker daemon, skipping pull
	I0911 11:27:26.926663  227744 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b exists in daemon, skipping load
	I0911 11:27:26.926682  227744 cache.go:195] Successfully downloaded all kic artifacts
	I0911 11:27:26.926721  227744 start.go:365] acquiring machines lock for multinode-517978: {Name:mk75722219a1fd7a413dbf1e6cf26da09861b225 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 11:27:26.926821  227744 start.go:369] acquired machines lock for "multinode-517978" in 79.95µs
	I0911 11:27:26.926851  227744 start.go:93] Provisioning new machine with config: &{Name:multinode-517978 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:multinode-517978 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0911 11:27:26.926947  227744 start.go:125] createHost starting for "" (driver="docker")
	I0911 11:27:26.928888  227744 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0911 11:27:26.929080  227744 start.go:159] libmachine.API.Create for "multinode-517978" (driver="docker")
	I0911 11:27:26.929112  227744 client.go:168] LocalClient.Create starting
	I0911 11:27:26.929183  227744 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17223-136166/.minikube/certs/ca.pem
	I0911 11:27:26.929213  227744 main.go:141] libmachine: Decoding PEM data...
	I0911 11:27:26.929236  227744 main.go:141] libmachine: Parsing certificate...
	I0911 11:27:26.929292  227744 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17223-136166/.minikube/certs/cert.pem
	I0911 11:27:26.929311  227744 main.go:141] libmachine: Decoding PEM data...
	I0911 11:27:26.929319  227744 main.go:141] libmachine: Parsing certificate...
	I0911 11:27:26.929603  227744 cli_runner.go:164] Run: docker network inspect multinode-517978 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0911 11:27:26.945261  227744 cli_runner.go:211] docker network inspect multinode-517978 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0911 11:27:26.945338  227744 network_create.go:281] running [docker network inspect multinode-517978] to gather additional debugging logs...
	I0911 11:27:26.945362  227744 cli_runner.go:164] Run: docker network inspect multinode-517978
	W0911 11:27:26.960419  227744 cli_runner.go:211] docker network inspect multinode-517978 returned with exit code 1
	I0911 11:27:26.960449  227744 network_create.go:284] error running [docker network inspect multinode-517978]: docker network inspect multinode-517978: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-517978 not found
	I0911 11:27:26.960464  227744 network_create.go:286] output of [docker network inspect multinode-517978]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-517978 not found
	
	** /stderr **
	I0911 11:27:26.960512  227744 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0911 11:27:26.976095  227744 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-20e875ef8442 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:d7:c6:0a:5c} reservation:<nil>}
	I0911 11:27:26.976556  227744 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0017defe0}
	I0911 11:27:26.976585  227744 network_create.go:123] attempt to create docker network multinode-517978 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0911 11:27:26.976626  227744 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-517978 multinode-517978
	I0911 11:27:27.026005  227744 network_create.go:107] docker network multinode-517978 192.168.58.0/24 created
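
The network_create.go lines above are minikube's free-subnet scan: 192.168.49.0/24 is already held by an earlier cluster, so it settles on 192.168.58.0/24 and creates the bridge network whose gateway, 192.168.58.1, is the address the PingHostFrom2Pods test later fails to reach. A hedged Go sketch of that scan follows; the starting octet of 49 and the step of 9 are inferred from these logs rather than from any documented contract.

package main

import (
	"fmt"
	"net"
)

// firstFreeSubnet walks candidate 192.168.x.0/24 blocks the way the scan
// above appears to (49, 58, 67, ...) and returns the first block that does
// not overlap an already-used network.
func firstFreeSubnet(taken []*net.IPNet) *net.IPNet {
	for third := 49; third <= 254; third += 9 {
		_, candidate, _ := net.ParseCIDR(fmt.Sprintf("192.168.%d.0/24", third))
		overlaps := false
		for _, t := range taken {
			if t.Contains(candidate.IP) || candidate.Contains(t.IP) {
				overlaps = true
				break
			}
		}
		if !overlaps {
			return candidate
		}
	}
	return nil
}

func main() {
	_, used, _ := net.ParseCIDR("192.168.49.0/24") // taken by the earlier addons cluster
	fmt.Println("free subnet:", firstFreeSubnet([]*net.IPNet{used})) // 192.168.58.0/24
}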
	I0911 11:27:27.026035  227744 kic.go:117] calculated static IP "192.168.58.2" for the "multinode-517978" container
	I0911 11:27:27.026155  227744 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0911 11:27:27.041387  227744 cli_runner.go:164] Run: docker volume create multinode-517978 --label name.minikube.sigs.k8s.io=multinode-517978 --label created_by.minikube.sigs.k8s.io=true
	I0911 11:27:27.058761  227744 oci.go:103] Successfully created a docker volume multinode-517978
	I0911 11:27:27.058840  227744 cli_runner.go:164] Run: docker run --rm --name multinode-517978-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-517978 --entrypoint /usr/bin/test -v multinode-517978:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b -d /var/lib
	I0911 11:27:27.549478  227744 oci.go:107] Successfully prepared a docker volume multinode-517978
	I0911 11:27:27.549504  227744 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0911 11:27:27.549526  227744 kic.go:190] Starting extracting preloaded images to volume ...
	I0911 11:27:27.549664  227744 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17223-136166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-517978:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b -I lz4 -xf /preloaded.tar -C /extractDir
	I0911 11:27:32.601333  227744 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17223-136166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-517978:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b -I lz4 -xf /preloaded.tar -C /extractDir: (5.051609791s)
	I0911 11:27:32.601369  227744 kic.go:199] duration metric: took 5.051838 seconds to extract preloaded images to volume
	W0911 11:27:32.601513  227744 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0911 11:27:32.601621  227744 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0911 11:27:32.653358  227744 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-517978 --name multinode-517978 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-517978 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-517978 --network multinode-517978 --ip 192.168.58.2 --volume multinode-517978:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b
	I0911 11:27:32.948251  227744 cli_runner.go:164] Run: docker container inspect multinode-517978 --format={{.State.Running}}
	I0911 11:27:32.965427  227744 cli_runner.go:164] Run: docker container inspect multinode-517978 --format={{.State.Status}}
	I0911 11:27:32.982247  227744 cli_runner.go:164] Run: docker exec multinode-517978 stat /var/lib/dpkg/alternatives/iptables
	I0911 11:27:33.033731  227744 oci.go:144] the created container "multinode-517978" has a running status.
	I0911 11:27:33.033760  227744 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/17223-136166/.minikube/machines/multinode-517978/id_rsa...
	I0911 11:27:33.129985  227744 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-136166/.minikube/machines/multinode-517978/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0911 11:27:33.130037  227744 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17223-136166/.minikube/machines/multinode-517978/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0911 11:27:33.150363  227744 cli_runner.go:164] Run: docker container inspect multinode-517978 --format={{.State.Status}}
	I0911 11:27:33.165962  227744 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0911 11:27:33.165984  227744 kic_runner.go:114] Args: [docker exec --privileged multinode-517978 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0911 11:27:33.227465  227744 cli_runner.go:164] Run: docker container inspect multinode-517978 --format={{.State.Status}}
	I0911 11:27:33.248855  227744 machine.go:88] provisioning docker machine ...
	I0911 11:27:33.248896  227744 ubuntu.go:169] provisioning hostname "multinode-517978"
	I0911 11:27:33.248963  227744 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-517978
	I0911 11:27:33.265281  227744 main.go:141] libmachine: Using SSH client type: native
	I0911 11:27:33.265990  227744 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 127.0.0.1 32967 <nil> <nil>}
	I0911 11:27:33.266021  227744 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-517978 && echo "multinode-517978" | sudo tee /etc/hostname
	I0911 11:27:33.266773  227744 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:49630->127.0.0.1:32967: read: connection reset by peer
	I0911 11:27:36.404827  227744 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-517978
	
	I0911 11:27:36.404920  227744 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-517978
	I0911 11:27:36.423045  227744 main.go:141] libmachine: Using SSH client type: native
	I0911 11:27:36.423495  227744 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 127.0.0.1 32967 <nil> <nil>}
	I0911 11:27:36.423517  227744 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-517978' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-517978/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-517978' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0911 11:27:36.550142  227744 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0911 11:27:36.550176  227744 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17223-136166/.minikube CaCertPath:/home/jenkins/minikube-integration/17223-136166/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17223-136166/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17223-136166/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17223-136166/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17223-136166/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17223-136166/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17223-136166/.minikube}
	I0911 11:27:36.550207  227744 ubuntu.go:177] setting up certificates
	I0911 11:27:36.550219  227744 provision.go:83] configureAuth start
	I0911 11:27:36.550278  227744 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-517978
	I0911 11:27:36.566517  227744 provision.go:138] copyHostCerts
	I0911 11:27:36.566559  227744 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-136166/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17223-136166/.minikube/ca.pem
	I0911 11:27:36.566595  227744 exec_runner.go:144] found /home/jenkins/minikube-integration/17223-136166/.minikube/ca.pem, removing ...
	I0911 11:27:36.566604  227744 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17223-136166/.minikube/ca.pem
	I0911 11:27:36.566677  227744 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17223-136166/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17223-136166/.minikube/ca.pem (1082 bytes)
	I0911 11:27:36.566768  227744 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-136166/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17223-136166/.minikube/cert.pem
	I0911 11:27:36.566789  227744 exec_runner.go:144] found /home/jenkins/minikube-integration/17223-136166/.minikube/cert.pem, removing ...
	I0911 11:27:36.566798  227744 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17223-136166/.minikube/cert.pem
	I0911 11:27:36.566834  227744 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17223-136166/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17223-136166/.minikube/cert.pem (1123 bytes)
	I0911 11:27:36.566904  227744 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-136166/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17223-136166/.minikube/key.pem
	I0911 11:27:36.566927  227744 exec_runner.go:144] found /home/jenkins/minikube-integration/17223-136166/.minikube/key.pem, removing ...
	I0911 11:27:36.566936  227744 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17223-136166/.minikube/key.pem
	I0911 11:27:36.566964  227744 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17223-136166/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17223-136166/.minikube/key.pem (1679 bytes)
	I0911 11:27:36.567038  227744 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17223-136166/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17223-136166/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17223-136166/.minikube/certs/ca-key.pem org=jenkins.multinode-517978 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube multinode-517978]
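	The SAN list above is what ends up in /etc/docker/server.pem inside the node. If a TLS failure is suspected, the generated certificate can be inspected on the host with openssl (a sketch; the path is taken from the log line above):

	  openssl x509 -in /home/jenkins/minikube-integration/17223-136166/.minikube/machines/server.pem -noout -text | grep -A1 'Subject Alternative Name'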
	I0911 11:27:36.694004  227744 provision.go:172] copyRemoteCerts
	I0911 11:27:36.694066  227744 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0911 11:27:36.694118  227744 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-517978
	I0911 11:27:36.710621  227744 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32967 SSHKeyPath:/home/jenkins/minikube-integration/17223-136166/.minikube/machines/multinode-517978/id_rsa Username:docker}
	I0911 11:27:36.802218  227744 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-136166/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0911 11:27:36.802288  227744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0911 11:27:36.823500  227744 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-136166/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0911 11:27:36.823568  227744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0911 11:27:36.843775  227744 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-136166/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0911 11:27:36.843833  227744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0911 11:27:36.864535  227744 provision.go:86] duration metric: configureAuth took 314.301143ms
	I0911 11:27:36.864564  227744 ubuntu.go:193] setting minikube options for container-runtime
	I0911 11:27:36.864769  227744 config.go:182] Loaded profile config "multinode-517978": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0911 11:27:36.864867  227744 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-517978
	I0911 11:27:36.880992  227744 main.go:141] libmachine: Using SSH client type: native
	I0911 11:27:36.881385  227744 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 127.0.0.1 32967 <nil> <nil>}
	I0911 11:27:36.881404  227744 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0911 11:27:37.091556  227744 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0911 11:27:37.091580  227744 machine.go:91] provisioned docker machine in 3.842698612s
	I0911 11:27:37.091590  227744 client.go:171] LocalClient.Create took 10.162468531s
	I0911 11:27:37.091610  227744 start.go:167] duration metric: libmachine.API.Create for "multinode-517978" took 10.162529311s
	I0911 11:27:37.091619  227744 start.go:300] post-start starting for "multinode-517978" (driver="docker")
	I0911 11:27:37.091630  227744 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0911 11:27:37.091702  227744 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0911 11:27:37.091750  227744 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-517978
	I0911 11:27:37.109767  227744 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32967 SSHKeyPath:/home/jenkins/minikube-integration/17223-136166/.minikube/machines/multinode-517978/id_rsa Username:docker}
	I0911 11:27:37.202911  227744 ssh_runner.go:195] Run: cat /etc/os-release
	I0911 11:27:37.205972  227744 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.3 LTS"
	I0911 11:27:37.205987  227744 command_runner.go:130] > NAME="Ubuntu"
	I0911 11:27:37.205993  227744 command_runner.go:130] > VERSION_ID="22.04"
	I0911 11:27:37.205998  227744 command_runner.go:130] > VERSION="22.04.3 LTS (Jammy Jellyfish)"
	I0911 11:27:37.206006  227744 command_runner.go:130] > VERSION_CODENAME=jammy
	I0911 11:27:37.206011  227744 command_runner.go:130] > ID=ubuntu
	I0911 11:27:37.206020  227744 command_runner.go:130] > ID_LIKE=debian
	I0911 11:27:37.206027  227744 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0911 11:27:37.206038  227744 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0911 11:27:37.206047  227744 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0911 11:27:37.206062  227744 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0911 11:27:37.206076  227744 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I0911 11:27:37.206151  227744 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0911 11:27:37.206180  227744 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0911 11:27:37.206188  227744 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0911 11:27:37.206197  227744 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0911 11:27:37.206207  227744 filesync.go:126] Scanning /home/jenkins/minikube-integration/17223-136166/.minikube/addons for local assets ...
	I0911 11:27:37.206262  227744 filesync.go:126] Scanning /home/jenkins/minikube-integration/17223-136166/.minikube/files for local assets ...
	I0911 11:27:37.206327  227744 filesync.go:149] local asset: /home/jenkins/minikube-integration/17223-136166/.minikube/files/etc/ssl/certs/1434172.pem -> 1434172.pem in /etc/ssl/certs
	I0911 11:27:37.206335  227744 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-136166/.minikube/files/etc/ssl/certs/1434172.pem -> /etc/ssl/certs/1434172.pem
	I0911 11:27:37.206409  227744 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0911 11:27:37.213922  227744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/files/etc/ssl/certs/1434172.pem --> /etc/ssl/certs/1434172.pem (1708 bytes)
	I0911 11:27:37.235219  227744 start.go:303] post-start completed in 143.586587ms
	I0911 11:27:37.235574  227744 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-517978
	I0911 11:27:37.252026  227744 profile.go:148] Saving config to /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/multinode-517978/config.json ...
	I0911 11:27:37.252273  227744 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0911 11:27:37.252328  227744 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-517978
	I0911 11:27:37.270219  227744 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32967 SSHKeyPath:/home/jenkins/minikube-integration/17223-136166/.minikube/machines/multinode-517978/id_rsa Username:docker}
	I0911 11:27:37.358515  227744 command_runner.go:130] > 23%
	I0911 11:27:37.358705  227744 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0911 11:27:37.362815  227744 command_runner.go:130] > 226G
	I0911 11:27:37.362854  227744 start.go:128] duration metric: createHost completed in 10.435897472s
	I0911 11:27:37.362865  227744 start.go:83] releasing machines lock for "multinode-517978", held for 10.436031358s
	I0911 11:27:37.362940  227744 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-517978
	I0911 11:27:37.380616  227744 ssh_runner.go:195] Run: cat /version.json
	I0911 11:27:37.380677  227744 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-517978
	I0911 11:27:37.380685  227744 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0911 11:27:37.380746  227744 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-517978
	I0911 11:27:37.398037  227744 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32967 SSHKeyPath:/home/jenkins/minikube-integration/17223-136166/.minikube/machines/multinode-517978/id_rsa Username:docker}
	I0911 11:27:37.399060  227744 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32967 SSHKeyPath:/home/jenkins/minikube-integration/17223-136166/.minikube/machines/multinode-517978/id_rsa Username:docker}
	I0911 11:27:37.568068  227744 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0911 11:27:37.570108  227744 command_runner.go:130] > {"iso_version": "v1.31.0-1692872107-17120", "kicbase_version": "v0.0.40-1693938323-17174", "minikube_version": "v1.31.2", "commit": "e8bff23b977c2baf6422c2e845727c4f6ee4a326"}
	I0911 11:27:37.570257  227744 ssh_runner.go:195] Run: systemctl --version
	I0911 11:27:37.574562  227744 command_runner.go:130] > systemd 249 (249.11-0ubuntu3.9)
	I0911 11:27:37.574604  227744 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I0911 11:27:37.574672  227744 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0911 11:27:37.709607  227744 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0911 11:27:37.713793  227744 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0911 11:27:37.713819  227744 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I0911 11:27:37.713827  227744 command_runner.go:130] > Device: 34h/52d	Inode: 4167201     Links: 1
	I0911 11:27:37.713841  227744 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0911 11:27:37.713852  227744 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I0911 11:27:37.713860  227744 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I0911 11:27:37.713869  227744 command_runner.go:130] > Change: 2023-09-11 11:09:30.748271188 +0000
	I0911 11:27:37.713881  227744 command_runner.go:130] >  Birth: 2023-09-11 11:09:30.748271188 +0000
	I0911 11:27:37.713946  227744 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0911 11:27:37.731539  227744 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0911 11:27:37.731612  227744 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0911 11:27:37.759679  227744 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I0911 11:27:37.759726  227744 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
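	Disabling is done by renaming rather than deleting, so the original CNI configs stay recoverable. Inside the node the effect should look like this (a sketch, assuming the files named in the log):

	  sudo ls /etc/cni/net.d
	  # expected: 100-crio-bridge.conf.mk_disabled  200-loopback.conf.mk_disabled  87-podman-bridge.conflist.mk_disabled ...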
	I0911 11:27:37.759743  227744 start.go:466] detecting cgroup driver to use...
	I0911 11:27:37.759777  227744 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0911 11:27:37.759819  227744 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0911 11:27:37.773191  227744 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0911 11:27:37.783205  227744 docker.go:196] disabling cri-docker service (if available) ...
	I0911 11:27:37.783263  227744 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0911 11:27:37.795476  227744 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0911 11:27:37.807920  227744 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0911 11:27:37.883834  227744 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0911 11:27:37.959251  227744 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I0911 11:27:37.959283  227744 docker.go:212] disabling docker service ...
	I0911 11:27:37.959331  227744 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0911 11:27:37.976655  227744 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0911 11:27:37.987319  227744 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0911 11:27:37.997558  227744 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0911 11:27:38.064144  227744 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0911 11:27:38.074841  227744 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0911 11:27:38.145252  227744 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0911 11:27:38.156092  227744 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0911 11:27:38.169806  227744 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0911 11:27:38.170509  227744 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0911 11:27:38.170577  227744 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 11:27:38.179270  227744 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0911 11:27:38.179332  227744 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 11:27:38.187856  227744 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 11:27:38.196159  227744 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 11:27:38.204539  227744 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0911 11:27:38.212636  227744 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0911 11:27:38.219119  227744 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0911 11:27:38.219777  227744 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0911 11:27:38.226850  227744 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0911 11:27:38.302750  227744 ssh_runner.go:195] Run: sudo systemctl restart crio
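	Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following drop-in values before the restart (a sketch: only the three key/value pairs are confirmed by the log; the section headers are assumed from the stock kicbase image):

	  [crio.runtime]
	  cgroup_manager = "cgroupfs"
	  conmon_cgroup = "pod"

	  [crio.image]
	  pause_image = "registry.k8s.io/pause:3.9"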
	I0911 11:27:38.391960  227744 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0911 11:27:38.392037  227744 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0911 11:27:38.395269  227744 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0911 11:27:38.395293  227744 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0911 11:27:38.395303  227744 command_runner.go:130] > Device: 40h/64d	Inode: 190         Links: 1
	I0911 11:27:38.395318  227744 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0911 11:27:38.395330  227744 command_runner.go:130] > Access: 2023-09-11 11:27:38.378835196 +0000
	I0911 11:27:38.395342  227744 command_runner.go:130] > Modify: 2023-09-11 11:27:38.378835196 +0000
	I0911 11:27:38.395353  227744 command_runner.go:130] > Change: 2023-09-11 11:27:38.378835196 +0000
	I0911 11:27:38.395359  227744 command_runner.go:130] >  Birth: -
	I0911 11:27:38.395381  227744 start.go:534] Will wait 60s for crictl version
	I0911 11:27:38.395422  227744 ssh_runner.go:195] Run: which crictl
	I0911 11:27:38.398220  227744 command_runner.go:130] > /usr/bin/crictl
	I0911 11:27:38.398285  227744 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0911 11:27:38.427326  227744 command_runner.go:130] > Version:  0.1.0
	I0911 11:27:38.427351  227744 command_runner.go:130] > RuntimeName:  cri-o
	I0911 11:27:38.427356  227744 command_runner.go:130] > RuntimeVersion:  1.24.6
	I0911 11:27:38.427361  227744 command_runner.go:130] > RuntimeApiVersion:  v1
	I0911 11:27:38.429101  227744 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0911 11:27:38.429181  227744 ssh_runner.go:195] Run: crio --version
	I0911 11:27:38.460018  227744 command_runner.go:130] > crio version 1.24.6
	I0911 11:27:38.460043  227744 command_runner.go:130] > Version:          1.24.6
	I0911 11:27:38.460063  227744 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0911 11:27:38.460071  227744 command_runner.go:130] > GitTreeState:     clean
	I0911 11:27:38.460078  227744 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0911 11:27:38.460084  227744 command_runner.go:130] > GoVersion:        go1.18.2
	I0911 11:27:38.460088  227744 command_runner.go:130] > Compiler:         gc
	I0911 11:27:38.460092  227744 command_runner.go:130] > Platform:         linux/amd64
	I0911 11:27:38.460102  227744 command_runner.go:130] > Linkmode:         dynamic
	I0911 11:27:38.460113  227744 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0911 11:27:38.460119  227744 command_runner.go:130] > SeccompEnabled:   true
	I0911 11:27:38.460124  227744 command_runner.go:130] > AppArmorEnabled:  false
	I0911 11:27:38.461542  227744 ssh_runner.go:195] Run: crio --version
	I0911 11:27:38.493412  227744 command_runner.go:130] > crio version 1.24.6
	I0911 11:27:38.493430  227744 command_runner.go:130] > Version:          1.24.6
	I0911 11:27:38.493437  227744 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0911 11:27:38.493445  227744 command_runner.go:130] > GitTreeState:     clean
	I0911 11:27:38.493451  227744 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0911 11:27:38.493455  227744 command_runner.go:130] > GoVersion:        go1.18.2
	I0911 11:27:38.493459  227744 command_runner.go:130] > Compiler:         gc
	I0911 11:27:38.493464  227744 command_runner.go:130] > Platform:         linux/amd64
	I0911 11:27:38.493469  227744 command_runner.go:130] > Linkmode:         dynamic
	I0911 11:27:38.493476  227744 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0911 11:27:38.493485  227744 command_runner.go:130] > SeccompEnabled:   true
	I0911 11:27:38.493489  227744 command_runner.go:130] > AppArmorEnabled:  false
	I0911 11:27:38.495760  227744 out.go:177] * Preparing Kubernetes v1.28.1 on CRI-O 1.24.6 ...
	I0911 11:27:38.497356  227744 cli_runner.go:164] Run: docker network inspect multinode-517978 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0911 11:27:38.513821  227744 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0911 11:27:38.517469  227744 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
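	The rewrite keeps every other /etc/hosts entry and appends a single line mapping host.minikube.internal to the network gateway. From inside the node this can be confirmed with (a sketch):

	  getent hosts host.minikube.internal
	  # expected: 192.168.58.1    host.minikube.internal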
	I0911 11:27:38.527296  227744 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0911 11:27:38.527353  227744 ssh_runner.go:195] Run: sudo crictl images --output json
	I0911 11:27:38.574159  227744 command_runner.go:130] > {
	I0911 11:27:38.574184  227744 command_runner.go:130] >   "images": [
	I0911 11:27:38.574193  227744 command_runner.go:130] >     {
	I0911 11:27:38.574205  227744 command_runner.go:130] >       "id": "b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da",
	I0911 11:27:38.574213  227744 command_runner.go:130] >       "repoTags": [
	I0911 11:27:38.574222  227744 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230511-dc714da8"
	I0911 11:27:38.574232  227744 command_runner.go:130] >       ],
	I0911 11:27:38.574239  227744 command_runner.go:130] >       "repoDigests": [
	I0911 11:27:38.574254  227744 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974",
	I0911 11:27:38.574264  227744 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7c15172bd152f05b102cea9c8f82ef5abeb56797ec85630923fb98d20fd519e9"
	I0911 11:27:38.574267  227744 command_runner.go:130] >       ],
	I0911 11:27:38.574272  227744 command_runner.go:130] >       "size": "65249302",
	I0911 11:27:38.574276  227744 command_runner.go:130] >       "uid": null,
	I0911 11:27:38.574279  227744 command_runner.go:130] >       "username": "",
	I0911 11:27:38.574285  227744 command_runner.go:130] >       "spec": null,
	I0911 11:27:38.574289  227744 command_runner.go:130] >       "pinned": false
	I0911 11:27:38.574293  227744 command_runner.go:130] >     },
	I0911 11:27:38.574296  227744 command_runner.go:130] >     {
	I0911 11:27:38.574303  227744 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0911 11:27:38.574309  227744 command_runner.go:130] >       "repoTags": [
	I0911 11:27:38.574317  227744 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0911 11:27:38.574323  227744 command_runner.go:130] >       ],
	I0911 11:27:38.574327  227744 command_runner.go:130] >       "repoDigests": [
	I0911 11:27:38.574334  227744 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0911 11:27:38.574344  227744 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0911 11:27:38.574348  227744 command_runner.go:130] >       ],
	I0911 11:27:38.574354  227744 command_runner.go:130] >       "size": "31470524",
	I0911 11:27:38.574360  227744 command_runner.go:130] >       "uid": null,
	I0911 11:27:38.574364  227744 command_runner.go:130] >       "username": "",
	I0911 11:27:38.574371  227744 command_runner.go:130] >       "spec": null,
	I0911 11:27:38.574375  227744 command_runner.go:130] >       "pinned": false
	I0911 11:27:38.574379  227744 command_runner.go:130] >     },
	I0911 11:27:38.574383  227744 command_runner.go:130] >     {
	I0911 11:27:38.574388  227744 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I0911 11:27:38.574394  227744 command_runner.go:130] >       "repoTags": [
	I0911 11:27:38.574398  227744 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I0911 11:27:38.574404  227744 command_runner.go:130] >       ],
	I0911 11:27:38.574408  227744 command_runner.go:130] >       "repoDigests": [
	I0911 11:27:38.574415  227744 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I0911 11:27:38.574425  227744 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I0911 11:27:38.574429  227744 command_runner.go:130] >       ],
	I0911 11:27:38.574433  227744 command_runner.go:130] >       "size": "53621675",
	I0911 11:27:38.574437  227744 command_runner.go:130] >       "uid": null,
	I0911 11:27:38.574441  227744 command_runner.go:130] >       "username": "",
	I0911 11:27:38.574445  227744 command_runner.go:130] >       "spec": null,
	I0911 11:27:38.574451  227744 command_runner.go:130] >       "pinned": false
	I0911 11:27:38.574455  227744 command_runner.go:130] >     },
	I0911 11:27:38.574460  227744 command_runner.go:130] >     {
	I0911 11:27:38.574466  227744 command_runner.go:130] >       "id": "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9",
	I0911 11:27:38.574472  227744 command_runner.go:130] >       "repoTags": [
	I0911 11:27:38.574478  227744 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I0911 11:27:38.574483  227744 command_runner.go:130] >       ],
	I0911 11:27:38.574487  227744 command_runner.go:130] >       "repoDigests": [
	I0911 11:27:38.574494  227744 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15",
	I0911 11:27:38.574503  227744 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"
	I0911 11:27:38.574540  227744 command_runner.go:130] >       ],
	I0911 11:27:38.574551  227744 command_runner.go:130] >       "size": "295456551",
	I0911 11:27:38.574555  227744 command_runner.go:130] >       "uid": {
	I0911 11:27:38.574559  227744 command_runner.go:130] >         "value": "0"
	I0911 11:27:38.574563  227744 command_runner.go:130] >       },
	I0911 11:27:38.574567  227744 command_runner.go:130] >       "username": "",
	I0911 11:27:38.574573  227744 command_runner.go:130] >       "spec": null,
	I0911 11:27:38.574578  227744 command_runner.go:130] >       "pinned": false
	I0911 11:27:38.574581  227744 command_runner.go:130] >     },
	I0911 11:27:38.574585  227744 command_runner.go:130] >     {
	I0911 11:27:38.574591  227744 command_runner.go:130] >       "id": "5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77",
	I0911 11:27:38.574597  227744 command_runner.go:130] >       "repoTags": [
	I0911 11:27:38.574602  227744 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.1"
	I0911 11:27:38.574608  227744 command_runner.go:130] >       ],
	I0911 11:27:38.574612  227744 command_runner.go:130] >       "repoDigests": [
	I0911 11:27:38.574625  227744 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774",
	I0911 11:27:38.574635  227744 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:f517207d13adeb50c63f9bdac2824e0e7512817eca47ac0540685771243742b2"
	I0911 11:27:38.574640  227744 command_runner.go:130] >       ],
	I0911 11:27:38.574644  227744 command_runner.go:130] >       "size": "126972880",
	I0911 11:27:38.574648  227744 command_runner.go:130] >       "uid": {
	I0911 11:27:38.574652  227744 command_runner.go:130] >         "value": "0"
	I0911 11:27:38.574656  227744 command_runner.go:130] >       },
	I0911 11:27:38.574659  227744 command_runner.go:130] >       "username": "",
	I0911 11:27:38.574663  227744 command_runner.go:130] >       "spec": null,
	I0911 11:27:38.574667  227744 command_runner.go:130] >       "pinned": false
	I0911 11:27:38.574671  227744 command_runner.go:130] >     },
	I0911 11:27:38.574674  227744 command_runner.go:130] >     {
	I0911 11:27:38.574680  227744 command_runner.go:130] >       "id": "821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac",
	I0911 11:27:38.574686  227744 command_runner.go:130] >       "repoTags": [
	I0911 11:27:38.574692  227744 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.1"
	I0911 11:27:38.574697  227744 command_runner.go:130] >       ],
	I0911 11:27:38.574701  227744 command_runner.go:130] >       "repoDigests": [
	I0911 11:27:38.574708  227744 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830",
	I0911 11:27:38.574718  227744 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:dda6dba8a55203ed1595efcda865a526b9282c2d9b959e9ed0a88f54a7a91195"
	I0911 11:27:38.574723  227744 command_runner.go:130] >       ],
	I0911 11:27:38.574727  227744 command_runner.go:130] >       "size": "123163446",
	I0911 11:27:38.574733  227744 command_runner.go:130] >       "uid": {
	I0911 11:27:38.574737  227744 command_runner.go:130] >         "value": "0"
	I0911 11:27:38.574741  227744 command_runner.go:130] >       },
	I0911 11:27:38.574744  227744 command_runner.go:130] >       "username": "",
	I0911 11:27:38.574751  227744 command_runner.go:130] >       "spec": null,
	I0911 11:27:38.574755  227744 command_runner.go:130] >       "pinned": false
	I0911 11:27:38.574761  227744 command_runner.go:130] >     },
	I0911 11:27:38.574764  227744 command_runner.go:130] >     {
	I0911 11:27:38.574770  227744 command_runner.go:130] >       "id": "6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5",
	I0911 11:27:38.574776  227744 command_runner.go:130] >       "repoTags": [
	I0911 11:27:38.574781  227744 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.1"
	I0911 11:27:38.574784  227744 command_runner.go:130] >       ],
	I0911 11:27:38.574788  227744 command_runner.go:130] >       "repoDigests": [
	I0911 11:27:38.574795  227744 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3",
	I0911 11:27:38.574804  227744 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30096ad233e7bfe72662180c5ac4497f732346d6d25b7c1f1c0c7cb1a1e7e41c"
	I0911 11:27:38.574807  227744 command_runner.go:130] >       ],
	I0911 11:27:38.574811  227744 command_runner.go:130] >       "size": "74680215",
	I0911 11:27:38.574815  227744 command_runner.go:130] >       "uid": null,
	I0911 11:27:38.574819  227744 command_runner.go:130] >       "username": "",
	I0911 11:27:38.574825  227744 command_runner.go:130] >       "spec": null,
	I0911 11:27:38.574828  227744 command_runner.go:130] >       "pinned": false
	I0911 11:27:38.574832  227744 command_runner.go:130] >     },
	I0911 11:27:38.574835  227744 command_runner.go:130] >     {
	I0911 11:27:38.574841  227744 command_runner.go:130] >       "id": "b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a",
	I0911 11:27:38.574847  227744 command_runner.go:130] >       "repoTags": [
	I0911 11:27:38.574852  227744 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.1"
	I0911 11:27:38.574858  227744 command_runner.go:130] >       ],
	I0911 11:27:38.574861  227744 command_runner.go:130] >       "repoDigests": [
	I0911 11:27:38.574901  227744 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4",
	I0911 11:27:38.574915  227744 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:7e621071b5174e9c6c0e0268ddbbc9139d6cba29052bbb1131890bf91d06bf1e"
	I0911 11:27:38.574921  227744 command_runner.go:130] >       ],
	I0911 11:27:38.574928  227744 command_runner.go:130] >       "size": "61477686",
	I0911 11:27:38.574934  227744 command_runner.go:130] >       "uid": {
	I0911 11:27:38.574940  227744 command_runner.go:130] >         "value": "0"
	I0911 11:27:38.574949  227744 command_runner.go:130] >       },
	I0911 11:27:38.574955  227744 command_runner.go:130] >       "username": "",
	I0911 11:27:38.574962  227744 command_runner.go:130] >       "spec": null,
	I0911 11:27:38.574967  227744 command_runner.go:130] >       "pinned": false
	I0911 11:27:38.574973  227744 command_runner.go:130] >     },
	I0911 11:27:38.574976  227744 command_runner.go:130] >     {
	I0911 11:27:38.574985  227744 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0911 11:27:38.574990  227744 command_runner.go:130] >       "repoTags": [
	I0911 11:27:38.574995  227744 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0911 11:27:38.574999  227744 command_runner.go:130] >       ],
	I0911 11:27:38.575004  227744 command_runner.go:130] >       "repoDigests": [
	I0911 11:27:38.575013  227744 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0911 11:27:38.575028  227744 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0911 11:27:38.575037  227744 command_runner.go:130] >       ],
	I0911 11:27:38.575044  227744 command_runner.go:130] >       "size": "750414",
	I0911 11:27:38.575052  227744 command_runner.go:130] >       "uid": {
	I0911 11:27:38.575060  227744 command_runner.go:130] >         "value": "65535"
	I0911 11:27:38.575068  227744 command_runner.go:130] >       },
	I0911 11:27:38.575072  227744 command_runner.go:130] >       "username": "",
	I0911 11:27:38.575077  227744 command_runner.go:130] >       "spec": null,
	I0911 11:27:38.575082  227744 command_runner.go:130] >       "pinned": false
	I0911 11:27:38.575087  227744 command_runner.go:130] >     }
	I0911 11:27:38.575091  227744 command_runner.go:130] >   ]
	I0911 11:27:38.575096  227744 command_runner.go:130] > }
	I0911 11:27:38.576447  227744 crio.go:496] all images are preloaded for cri-o runtime.
	I0911 11:27:38.576465  227744 crio.go:415] Images already preloaded, skipping extraction
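	The raw JSON above is the same data `crictl images` renders as a table; to list only the tags, a one-liner on a host with jq installed (an assumption, jq is not part of the recorded run):

	  sudo crictl images --output json | jq -r '.images[].repoTags[]'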
	I0911 11:27:38.576507  227744 ssh_runner.go:195] Run: sudo crictl images --output json
	I0911 11:27:38.608608  227744 command_runner.go:130] > {
	I0911 11:27:38.608631  227744 command_runner.go:130] >   "images": [
	I0911 11:27:38.608637  227744 command_runner.go:130] >     {
	I0911 11:27:38.608647  227744 command_runner.go:130] >       "id": "b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da",
	I0911 11:27:38.608653  227744 command_runner.go:130] >       "repoTags": [
	I0911 11:27:38.608662  227744 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230511-dc714da8"
	I0911 11:27:38.608667  227744 command_runner.go:130] >       ],
	I0911 11:27:38.608674  227744 command_runner.go:130] >       "repoDigests": [
	I0911 11:27:38.608687  227744 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974",
	I0911 11:27:38.608704  227744 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7c15172bd152f05b102cea9c8f82ef5abeb56797ec85630923fb98d20fd519e9"
	I0911 11:27:38.608715  227744 command_runner.go:130] >       ],
	I0911 11:27:38.608725  227744 command_runner.go:130] >       "size": "65249302",
	I0911 11:27:38.608738  227744 command_runner.go:130] >       "uid": null,
	I0911 11:27:38.608748  227744 command_runner.go:130] >       "username": "",
	I0911 11:27:38.608758  227744 command_runner.go:130] >       "spec": null,
	I0911 11:27:38.608768  227744 command_runner.go:130] >       "pinned": false
	I0911 11:27:38.608777  227744 command_runner.go:130] >     },
	I0911 11:27:38.608783  227744 command_runner.go:130] >     {
	I0911 11:27:38.608791  227744 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0911 11:27:38.608799  227744 command_runner.go:130] >       "repoTags": [
	I0911 11:27:38.608806  227744 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0911 11:27:38.608811  227744 command_runner.go:130] >       ],
	I0911 11:27:38.608818  227744 command_runner.go:130] >       "repoDigests": [
	I0911 11:27:38.608832  227744 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0911 11:27:38.608844  227744 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0911 11:27:38.608850  227744 command_runner.go:130] >       ],
	I0911 11:27:38.608863  227744 command_runner.go:130] >       "size": "31470524",
	I0911 11:27:38.608874  227744 command_runner.go:130] >       "uid": null,
	I0911 11:27:38.608882  227744 command_runner.go:130] >       "username": "",
	I0911 11:27:38.608892  227744 command_runner.go:130] >       "spec": null,
	I0911 11:27:38.608902  227744 command_runner.go:130] >       "pinned": false
	I0911 11:27:38.608911  227744 command_runner.go:130] >     },
	I0911 11:27:38.608918  227744 command_runner.go:130] >     {
	I0911 11:27:38.608933  227744 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I0911 11:27:38.608943  227744 command_runner.go:130] >       "repoTags": [
	I0911 11:27:38.608956  227744 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I0911 11:27:38.608965  227744 command_runner.go:130] >       ],
	I0911 11:27:38.608973  227744 command_runner.go:130] >       "repoDigests": [
	I0911 11:27:38.608989  227744 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I0911 11:27:38.609006  227744 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I0911 11:27:38.609015  227744 command_runner.go:130] >       ],
	I0911 11:27:38.609025  227744 command_runner.go:130] >       "size": "53621675",
	I0911 11:27:38.609035  227744 command_runner.go:130] >       "uid": null,
	I0911 11:27:38.609042  227744 command_runner.go:130] >       "username": "",
	I0911 11:27:38.609052  227744 command_runner.go:130] >       "spec": null,
	I0911 11:27:38.609063  227744 command_runner.go:130] >       "pinned": false
	I0911 11:27:38.609071  227744 command_runner.go:130] >     },
	I0911 11:27:38.609078  227744 command_runner.go:130] >     {
	I0911 11:27:38.609093  227744 command_runner.go:130] >       "id": "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9",
	I0911 11:27:38.609104  227744 command_runner.go:130] >       "repoTags": [
	I0911 11:27:38.609117  227744 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I0911 11:27:38.609126  227744 command_runner.go:130] >       ],
	I0911 11:27:38.609134  227744 command_runner.go:130] >       "repoDigests": [
	I0911 11:27:38.609150  227744 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15",
	I0911 11:27:38.609165  227744 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"
	I0911 11:27:38.609179  227744 command_runner.go:130] >       ],
	I0911 11:27:38.609190  227744 command_runner.go:130] >       "size": "295456551",
	I0911 11:27:38.609197  227744 command_runner.go:130] >       "uid": {
	I0911 11:27:38.609204  227744 command_runner.go:130] >         "value": "0"
	I0911 11:27:38.609213  227744 command_runner.go:130] >       },
	I0911 11:27:38.609221  227744 command_runner.go:130] >       "username": "",
	I0911 11:27:38.609231  227744 command_runner.go:130] >       "spec": null,
	I0911 11:27:38.609249  227744 command_runner.go:130] >       "pinned": false
	I0911 11:27:38.609258  227744 command_runner.go:130] >     },
	I0911 11:27:38.609265  227744 command_runner.go:130] >     {
	I0911 11:27:38.609279  227744 command_runner.go:130] >       "id": "5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77",
	I0911 11:27:38.609289  227744 command_runner.go:130] >       "repoTags": [
	I0911 11:27:38.609299  227744 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.1"
	I0911 11:27:38.609319  227744 command_runner.go:130] >       ],
	I0911 11:27:38.609329  227744 command_runner.go:130] >       "repoDigests": [
	I0911 11:27:38.609347  227744 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774",
	I0911 11:27:38.609364  227744 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:f517207d13adeb50c63f9bdac2824e0e7512817eca47ac0540685771243742b2"
	I0911 11:27:38.609372  227744 command_runner.go:130] >       ],
	I0911 11:27:38.609380  227744 command_runner.go:130] >       "size": "126972880",
	I0911 11:27:38.609390  227744 command_runner.go:130] >       "uid": {
	I0911 11:27:38.609400  227744 command_runner.go:130] >         "value": "0"
	I0911 11:27:38.609409  227744 command_runner.go:130] >       },
	I0911 11:27:38.609417  227744 command_runner.go:130] >       "username": "",
	I0911 11:27:38.609427  227744 command_runner.go:130] >       "spec": null,
	I0911 11:27:38.609436  227744 command_runner.go:130] >       "pinned": false
	I0911 11:27:38.609445  227744 command_runner.go:130] >     },
	I0911 11:27:38.609452  227744 command_runner.go:130] >     {
	I0911 11:27:38.609466  227744 command_runner.go:130] >       "id": "821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac",
	I0911 11:27:38.609475  227744 command_runner.go:130] >       "repoTags": [
	I0911 11:27:38.609489  227744 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.1"
	I0911 11:27:38.609498  227744 command_runner.go:130] >       ],
	I0911 11:27:38.609507  227744 command_runner.go:130] >       "repoDigests": [
	I0911 11:27:38.609524  227744 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830",
	I0911 11:27:38.609541  227744 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:dda6dba8a55203ed1595efcda865a526b9282c2d9b959e9ed0a88f54a7a91195"
	I0911 11:27:38.609550  227744 command_runner.go:130] >       ],
	I0911 11:27:38.609558  227744 command_runner.go:130] >       "size": "123163446",
	I0911 11:27:38.609570  227744 command_runner.go:130] >       "uid": {
	I0911 11:27:38.609580  227744 command_runner.go:130] >         "value": "0"
	I0911 11:27:38.609590  227744 command_runner.go:130] >       },
	I0911 11:27:38.609598  227744 command_runner.go:130] >       "username": "",
	I0911 11:27:38.609609  227744 command_runner.go:130] >       "spec": null,
	I0911 11:27:38.609620  227744 command_runner.go:130] >       "pinned": false
	I0911 11:27:38.609629  227744 command_runner.go:130] >     },
	I0911 11:27:38.609636  227744 command_runner.go:130] >     {
	I0911 11:27:38.609650  227744 command_runner.go:130] >       "id": "6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5",
	I0911 11:27:38.609660  227744 command_runner.go:130] >       "repoTags": [
	I0911 11:27:38.609672  227744 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.1"
	I0911 11:27:38.609682  227744 command_runner.go:130] >       ],
	I0911 11:27:38.609691  227744 command_runner.go:130] >       "repoDigests": [
	I0911 11:27:38.609707  227744 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3",
	I0911 11:27:38.609723  227744 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30096ad233e7bfe72662180c5ac4497f732346d6d25b7c1f1c0c7cb1a1e7e41c"
	I0911 11:27:38.609732  227744 command_runner.go:130] >       ],
	I0911 11:27:38.609740  227744 command_runner.go:130] >       "size": "74680215",
	I0911 11:27:38.609748  227744 command_runner.go:130] >       "uid": null,
	I0911 11:27:38.609758  227744 command_runner.go:130] >       "username": "",
	I0911 11:27:38.609769  227744 command_runner.go:130] >       "spec": null,
	I0911 11:27:38.609778  227744 command_runner.go:130] >       "pinned": false
	I0911 11:27:38.609787  227744 command_runner.go:130] >     },
	I0911 11:27:38.609796  227744 command_runner.go:130] >     {
	I0911 11:27:38.609811  227744 command_runner.go:130] >       "id": "b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a",
	I0911 11:27:38.609821  227744 command_runner.go:130] >       "repoTags": [
	I0911 11:27:38.609833  227744 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.1"
	I0911 11:27:38.609843  227744 command_runner.go:130] >       ],
	I0911 11:27:38.609853  227744 command_runner.go:130] >       "repoDigests": [
	I0911 11:27:38.609882  227744 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4",
	I0911 11:27:38.609899  227744 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:7e621071b5174e9c6c0e0268ddbbc9139d6cba29052bbb1131890bf91d06bf1e"
	I0911 11:27:38.609905  227744 command_runner.go:130] >       ],
	I0911 11:27:38.609913  227744 command_runner.go:130] >       "size": "61477686",
	I0911 11:27:38.609923  227744 command_runner.go:130] >       "uid": {
	I0911 11:27:38.609932  227744 command_runner.go:130] >         "value": "0"
	I0911 11:27:38.609942  227744 command_runner.go:130] >       },
	I0911 11:27:38.609951  227744 command_runner.go:130] >       "username": "",
	I0911 11:27:38.609961  227744 command_runner.go:130] >       "spec": null,
	I0911 11:27:38.609969  227744 command_runner.go:130] >       "pinned": false
	I0911 11:27:38.609978  227744 command_runner.go:130] >     },
	I0911 11:27:38.609984  227744 command_runner.go:130] >     {
	I0911 11:27:38.609995  227744 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0911 11:27:38.610006  227744 command_runner.go:130] >       "repoTags": [
	I0911 11:27:38.610018  227744 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0911 11:27:38.610025  227744 command_runner.go:130] >       ],
	I0911 11:27:38.610036  227744 command_runner.go:130] >       "repoDigests": [
	I0911 11:27:38.610050  227744 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0911 11:27:38.610065  227744 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0911 11:27:38.610074  227744 command_runner.go:130] >       ],
	I0911 11:27:38.610083  227744 command_runner.go:130] >       "size": "750414",
	I0911 11:27:38.610108  227744 command_runner.go:130] >       "uid": {
	I0911 11:27:38.610120  227744 command_runner.go:130] >         "value": "65535"
	I0911 11:27:38.610129  227744 command_runner.go:130] >       },
	I0911 11:27:38.610138  227744 command_runner.go:130] >       "username": "",
	I0911 11:27:38.610148  227744 command_runner.go:130] >       "spec": null,
	I0911 11:27:38.610159  227744 command_runner.go:130] >       "pinned": false
	I0911 11:27:38.610167  227744 command_runner.go:130] >     }
	I0911 11:27:38.610175  227744 command_runner.go:130] >   ]
	I0911 11:27:38.610182  227744 command_runner.go:130] > }
	I0911 11:27:38.610312  227744 crio.go:496] all images are preloaded for cri-o runtime.
	I0911 11:27:38.610325  227744 cache_images.go:84] Images are preloaded, skipping loading
	I0911 11:27:38.610400  227744 ssh_runner.go:195] Run: crio config
	I0911 11:27:38.648402  227744 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0911 11:27:38.648427  227744 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0911 11:27:38.648433  227744 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0911 11:27:38.648437  227744 command_runner.go:130] > #
	I0911 11:27:38.648447  227744 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0911 11:27:38.648454  227744 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0911 11:27:38.648461  227744 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0911 11:27:38.648478  227744 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0911 11:27:38.648483  227744 command_runner.go:130] > # reload'.
	I0911 11:27:38.648493  227744 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0911 11:27:38.648505  227744 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0911 11:27:38.648529  227744 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0911 11:27:38.648538  227744 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0911 11:27:38.648544  227744 command_runner.go:130] > [crio]
	I0911 11:27:38.648558  227744 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0911 11:27:38.648567  227744 command_runner.go:130] > # containers images, in this directory.
	I0911 11:27:38.648578  227744 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I0911 11:27:38.648589  227744 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0911 11:27:38.648602  227744 command_runner.go:130] > # runroot = "/tmp/containers-user-1000/containers"
	I0911 11:27:38.648616  227744 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0911 11:27:38.648629  227744 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0911 11:27:38.648639  227744 command_runner.go:130] > # storage_driver = "vfs"
	I0911 11:27:38.648648  227744 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0911 11:27:38.648661  227744 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0911 11:27:38.648671  227744 command_runner.go:130] > # storage_option = [
	I0911 11:27:38.648691  227744 command_runner.go:130] > # ]
	I0911 11:27:38.648706  227744 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0911 11:27:38.648719  227744 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0911 11:27:38.648730  227744 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0911 11:27:38.648742  227744 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0911 11:27:38.648755  227744 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0911 11:27:38.648766  227744 command_runner.go:130] > # always happen on a node reboot
	I0911 11:27:38.648773  227744 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0911 11:27:38.648786  227744 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0911 11:27:38.648799  227744 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0911 11:27:38.648816  227744 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0911 11:27:38.648828  227744 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0911 11:27:38.648844  227744 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0911 11:27:38.648859  227744 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0911 11:27:38.648869  227744 command_runner.go:130] > # internal_wipe = true
	I0911 11:27:38.648880  227744 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0911 11:27:38.648895  227744 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0911 11:27:38.648904  227744 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0911 11:27:38.648917  227744 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0911 11:27:38.648937  227744 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0911 11:27:38.648947  227744 command_runner.go:130] > [crio.api]
	I0911 11:27:38.648956  227744 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0911 11:27:38.648969  227744 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0911 11:27:38.648978  227744 command_runner.go:130] > # IP address on which the stream server will listen.
	I0911 11:27:38.648990  227744 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0911 11:27:38.649001  227744 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0911 11:27:38.649013  227744 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0911 11:27:38.649023  227744 command_runner.go:130] > # stream_port = "0"
	I0911 11:27:38.649033  227744 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0911 11:27:38.649043  227744 command_runner.go:130] > # stream_enable_tls = false
	I0911 11:27:38.649053  227744 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0911 11:27:38.649064  227744 command_runner.go:130] > # stream_idle_timeout = ""
	I0911 11:27:38.649074  227744 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0911 11:27:38.649088  227744 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0911 11:27:38.649096  227744 command_runner.go:130] > # minutes.
	I0911 11:27:38.649102  227744 command_runner.go:130] > # stream_tls_cert = ""
	I0911 11:27:38.649111  227744 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0911 11:27:38.649126  227744 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0911 11:27:38.649134  227744 command_runner.go:130] > # stream_tls_key = ""
	I0911 11:27:38.649143  227744 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0911 11:27:38.649154  227744 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0911 11:27:38.649163  227744 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0911 11:27:38.649169  227744 command_runner.go:130] > # stream_tls_ca = ""
	I0911 11:27:38.649182  227744 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0911 11:27:38.649191  227744 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I0911 11:27:38.649207  227744 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0911 11:27:38.649218  227744 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
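	The stream-server TLS options above can be enabled together through a drop-in file. A minimal sketch, assuming CRI-O's /etc/crio/crio.conf.d drop-in directory and hypothetical certificate paths:

	    # /etc/crio/crio.conf.d/10-stream-tls.conf (hypothetical drop-in)
	    [crio.api]
	    stream_address = "127.0.0.1"
	    stream_port = "10010"                      # "0" would pick a random free port
	    stream_enable_tls = true
	    stream_tls_cert = "/etc/crio/stream.crt"   # reloaded within 5 minutes on change
	    stream_tls_key = "/etc/crio/stream.key"
	    stream_tls_ca = "/etc/crio/stream-ca.crt"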
	I0911 11:27:38.649274  227744 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0911 11:27:38.649286  227744 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0911 11:27:38.649292  227744 command_runner.go:130] > [crio.runtime]
	I0911 11:27:38.649301  227744 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0911 11:27:38.649316  227744 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0911 11:27:38.649322  227744 command_runner.go:130] > # "nofile=1024:2048"
	I0911 11:27:38.649330  227744 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0911 11:27:38.649336  227744 command_runner.go:130] > # default_ulimits = [
	I0911 11:27:38.649341  227744 command_runner.go:130] > # ]
	I0911 11:27:38.649349  227744 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0911 11:27:38.649360  227744 command_runner.go:130] > # no_pivot = false
	I0911 11:27:38.649373  227744 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0911 11:27:38.649382  227744 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0911 11:27:38.649390  227744 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0911 11:27:38.649398  227744 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0911 11:27:38.649407  227744 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0911 11:27:38.649416  227744 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0911 11:27:38.649423  227744 command_runner.go:130] > # conmon = ""
	I0911 11:27:38.649429  227744 command_runner.go:130] > # Cgroup setting for conmon
	I0911 11:27:38.649441  227744 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0911 11:27:38.649449  227744 command_runner.go:130] > conmon_cgroup = "pod"
	I0911 11:27:38.649457  227744 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0911 11:27:38.649466  227744 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0911 11:27:38.649478  227744 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0911 11:27:38.649487  227744 command_runner.go:130] > # conmon_env = [
	I0911 11:27:38.649492  227744 command_runner.go:130] > # ]
	I0911 11:27:38.649504  227744 command_runner.go:130] > # Additional environment variables to set for all the
	I0911 11:27:38.649513  227744 command_runner.go:130] > # containers. These are overridden if set in the
	I0911 11:27:38.649526  227744 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0911 11:27:38.649535  227744 command_runner.go:130] > # default_env = [
	I0911 11:27:38.649543  227744 command_runner.go:130] > # ]
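	The two list options above take "name=soft:hard" ulimit strings and KEY=VALUE environment entries. A minimal sketch, reusing the nofile example from the comment and a hypothetical proxy variable:

	    [crio.runtime]
	    default_ulimits = [
	        "nofile=1024:2048",                        # soft:hard limit for every container
	    ]
	    default_env = [
	        "HTTP_PROXY=http://proxy.internal:3128",   # hypothetical; image/runtime config wins
	    ]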
	I0911 11:27:38.649555  227744 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0911 11:27:38.649561  227744 command_runner.go:130] > # selinux = false
	I0911 11:27:38.649567  227744 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0911 11:27:38.649576  227744 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0911 11:27:38.649581  227744 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0911 11:27:38.649588  227744 command_runner.go:130] > # seccomp_profile = ""
	I0911 11:27:38.649594  227744 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0911 11:27:38.649602  227744 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0911 11:27:38.649608  227744 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0911 11:27:38.649616  227744 command_runner.go:130] > # which might increase security.
	I0911 11:27:38.649620  227744 command_runner.go:130] > # seccomp_use_default_when_empty = true
	I0911 11:27:38.649628  227744 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0911 11:27:38.649634  227744 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0911 11:27:38.649642  227744 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0911 11:27:38.649649  227744 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0911 11:27:38.649654  227744 command_runner.go:130] > # This option supports live configuration reload.
	I0911 11:27:38.649660  227744 command_runner.go:130] > # apparmor_profile = "crio-default"
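	As a sketch, the isolation options above could be combined in an override like this (the seccomp path is hypothetical; left unset, CRI-O uses its internal default profile):

	    [crio.runtime]
	    selinux = true
	    seccomp_profile = "/etc/crio/seccomp.json"   # hypothetical custom profile
	    seccomp_use_default_when_empty = true
	    apparmor_profile = "crio-default"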
	I0911 11:27:38.649669  227744 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0911 11:27:38.649681  227744 command_runner.go:130] > # the cgroup blockio controller.
	I0911 11:27:38.649687  227744 command_runner.go:130] > # blockio_config_file = ""
	I0911 11:27:38.649693  227744 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0911 11:27:38.649700  227744 command_runner.go:130] > # irqbalance daemon.
	I0911 11:27:38.649705  227744 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0911 11:27:38.649714  227744 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0911 11:27:38.649719  227744 command_runner.go:130] > # This option supports live configuration reload.
	I0911 11:27:38.649725  227744 command_runner.go:130] > # rdt_config_file = ""
	I0911 11:27:38.649730  227744 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0911 11:27:38.649737  227744 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0911 11:27:38.649744  227744 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0911 11:27:38.649751  227744 command_runner.go:130] > # separate_pull_cgroup = ""
	I0911 11:27:38.649758  227744 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0911 11:27:38.649766  227744 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0911 11:27:38.649770  227744 command_runner.go:130] > # will be added.
	I0911 11:27:38.649777  227744 command_runner.go:130] > # default_capabilities = [
	I0911 11:27:38.649781  227744 command_runner.go:130] > # 	"CHOWN",
	I0911 11:27:38.649787  227744 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0911 11:27:38.649791  227744 command_runner.go:130] > # 	"FSETID",
	I0911 11:27:38.649797  227744 command_runner.go:130] > # 	"FOWNER",
	I0911 11:27:38.649800  227744 command_runner.go:130] > # 	"SETGID",
	I0911 11:27:38.649807  227744 command_runner.go:130] > # 	"SETUID",
	I0911 11:27:38.649810  227744 command_runner.go:130] > # 	"SETPCAP",
	I0911 11:27:38.649817  227744 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0911 11:27:38.649821  227744 command_runner.go:130] > # 	"KILL",
	I0911 11:27:38.649826  227744 command_runner.go:130] > # ]
	I0911 11:27:38.649833  227744 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0911 11:27:38.649843  227744 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0911 11:27:38.649850  227744 command_runner.go:130] > # add_inheritable_capabilities = true
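	A trimmed capability set, sketched by keeping only a subset of the defaults listed above:

	    [crio.runtime]
	    default_capabilities = [
	        "CHOWN",
	        "DAC_OVERRIDE",
	        "FOWNER",
	        "SETGID",
	        "SETUID",
	        "NET_BIND_SERVICE",
	    ]
	    add_inheritable_capabilities = true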
	I0911 11:27:38.649856  227744 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0911 11:27:38.649864  227744 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0911 11:27:38.649867  227744 command_runner.go:130] > # default_sysctls = [
	I0911 11:27:38.649873  227744 command_runner.go:130] > # ]
	I0911 11:27:38.649878  227744 command_runner.go:130] > # List of devices on the host that a
	I0911 11:27:38.649886  227744 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0911 11:27:38.649891  227744 command_runner.go:130] > # allowed_devices = [
	I0911 11:27:38.649895  227744 command_runner.go:130] > # 	"/dev/fuse",
	I0911 11:27:38.649900  227744 command_runner.go:130] > # ]
	I0911 11:27:38.649905  227744 command_runner.go:130] > # List of additional devices, specified as
	I0911 11:27:38.649950  227744 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0911 11:27:38.649958  227744 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0911 11:27:38.649964  227744 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0911 11:27:38.649971  227744 command_runner.go:130] > # additional_devices = [
	I0911 11:27:38.649977  227744 command_runner.go:130] > # ]
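	Combining the two device lists above into one sketch (both example values come from the comments; the /dev/sdc mapping is illustrative only):

	    [crio.runtime]
	    allowed_devices = [
	        "/dev/fuse",                    # requestable via io.kubernetes.cri-o.Devices
	    ]
	    additional_devices = [
	        "/dev/sdc:/dev/xvdc:rwm",       # <host>:<container>:<permissions>
	    ]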
	I0911 11:27:38.649982  227744 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0911 11:27:38.649991  227744 command_runner.go:130] > # cdi_spec_dirs = [
	I0911 11:27:38.650000  227744 command_runner.go:130] > # 	"/etc/cdi",
	I0911 11:27:38.650009  227744 command_runner.go:130] > # 	"/var/run/cdi",
	I0911 11:27:38.650018  227744 command_runner.go:130] > # ]
	I0911 11:27:38.650029  227744 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0911 11:27:38.650043  227744 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0911 11:27:38.650051  227744 command_runner.go:130] > # Defaults to false.
	I0911 11:27:38.650056  227744 command_runner.go:130] > # device_ownership_from_security_context = false
	I0911 11:27:38.650064  227744 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0911 11:27:38.650072  227744 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0911 11:27:38.650076  227744 command_runner.go:130] > # hooks_dir = [
	I0911 11:27:38.650104  227744 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0911 11:27:38.650115  227744 command_runner.go:130] > # ]
	I0911 11:27:38.650124  227744 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0911 11:27:38.650133  227744 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0911 11:27:38.650139  227744 command_runner.go:130] > # its default mounts from the following two files:
	I0911 11:27:38.650144  227744 command_runner.go:130] > #
	I0911 11:27:38.650151  227744 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0911 11:27:38.650161  227744 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0911 11:27:38.650168  227744 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0911 11:27:38.650172  227744 command_runner.go:130] > #
	I0911 11:27:38.650178  227744 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0911 11:27:38.650186  227744 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0911 11:27:38.650193  227744 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0911 11:27:38.650200  227744 command_runner.go:130] > #      only add mounts it finds in this file.
	I0911 11:27:38.650203  227744 command_runner.go:130] > #
	I0911 11:27:38.650210  227744 command_runner.go:130] > # default_mounts_file = ""
	I0911 11:27:38.650216  227744 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0911 11:27:38.650225  227744 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0911 11:27:38.650231  227744 command_runner.go:130] > # pids_limit = 0
	I0911 11:27:38.650238  227744 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0911 11:27:38.650246  227744 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0911 11:27:38.650252  227744 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0911 11:27:38.650262  227744 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0911 11:27:38.650268  227744 command_runner.go:130] > # log_size_max = -1
	I0911 11:27:38.650276  227744 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0911 11:27:38.650285  227744 command_runner.go:130] > # log_to_journald = false
	I0911 11:27:38.650293  227744 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0911 11:27:38.650300  227744 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0911 11:27:38.650305  227744 command_runner.go:130] > # Path to directory for container attach sockets.
	I0911 11:27:38.650312  227744 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0911 11:27:38.650321  227744 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0911 11:27:38.650328  227744 command_runner.go:130] > # bind_mount_prefix = ""
	I0911 11:27:38.650333  227744 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0911 11:27:38.650339  227744 command_runner.go:130] > # read_only = false
	I0911 11:27:38.650345  227744 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0911 11:27:38.650353  227744 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0911 11:27:38.650360  227744 command_runner.go:130] > # live configuration reload.
	I0911 11:27:38.650364  227744 command_runner.go:130] > # log_level = "info"
	I0911 11:27:38.650369  227744 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0911 11:27:38.650376  227744 command_runner.go:130] > # This option supports live configuration reload.
	I0911 11:27:38.650380  227744 command_runner.go:130] > # log_filter = ""
	I0911 11:27:38.650388  227744 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0911 11:27:38.650397  227744 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0911 11:27:38.650404  227744 command_runner.go:130] > # separated by comma.
	I0911 11:27:38.650411  227744 command_runner.go:130] > # uid_mappings = ""
	I0911 11:27:38.650418  227744 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0911 11:27:38.650426  227744 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0911 11:27:38.650432  227744 command_runner.go:130] > # separated by comma.
	I0911 11:27:38.650436  227744 command_runner.go:130] > # gid_mappings = ""
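	A user-namespace mapping sketch in the containerID:HostID:Size form described above, assuming a hypothetical 65536-ID range starting at host ID 100000:

	    [crio.runtime]
	    uid_mappings = "0:100000:65536"    # containerUID:HostUID:Size
	    gid_mappings = "0:100000:65536"    # containerGID:HostGID:Size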
	I0911 11:27:38.650444  227744 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0911 11:27:38.650450  227744 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0911 11:27:38.650458  227744 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0911 11:27:38.650462  227744 command_runner.go:130] > # minimum_mappable_uid = -1
	I0911 11:27:38.650471  227744 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0911 11:27:38.650479  227744 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0911 11:27:38.650485  227744 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0911 11:27:38.650492  227744 command_runner.go:130] > # minimum_mappable_gid = -1
	I0911 11:27:38.650497  227744 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0911 11:27:38.650505  227744 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0911 11:27:38.650511  227744 command_runner.go:130] > # value is 30s; lower values are not considered by CRI-O.
	I0911 11:27:38.650518  227744 command_runner.go:130] > # ctr_stop_timeout = 30
	I0911 11:27:38.650523  227744 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0911 11:27:38.650554  227744 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0911 11:27:38.650566  227744 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0911 11:27:38.650570  227744 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0911 11:27:38.650574  227744 command_runner.go:130] > # drop_infra_ctr = true
	I0911 11:27:38.650580  227744 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0911 11:27:38.650585  227744 command_runner.go:130] > # You can use Linux CPU list format to specify desired CPUs.
	I0911 11:27:38.650595  227744 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0911 11:27:38.650602  227744 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0911 11:27:38.650614  227744 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0911 11:27:38.650626  227744 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0911 11:27:38.650636  227744 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0911 11:27:38.650647  227744 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0911 11:27:38.650668  227744 command_runner.go:130] > # pinns_path = ""
	I0911 11:27:38.650678  227744 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0911 11:27:38.650685  227744 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0911 11:27:38.650690  227744 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0911 11:27:38.650696  227744 command_runner.go:130] > # default_runtime = "runc"
	I0911 11:27:38.650702  227744 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0911 11:27:38.650712  227744 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of being created as a directory).
	I0911 11:27:38.650723  227744 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0911 11:27:38.650731  227744 command_runner.go:130] > # creation as a file is not desired either.
	I0911 11:27:38.650740  227744 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0911 11:27:38.650747  227744 command_runner.go:130] > # the hostname is being managed dynamically.
	I0911 11:27:38.650752  227744 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0911 11:27:38.650757  227744 command_runner.go:130] > # ]
	I0911 11:27:38.650763  227744 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0911 11:27:38.650772  227744 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0911 11:27:38.650780  227744 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0911 11:27:38.650789  227744 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0911 11:27:38.650794  227744 command_runner.go:130] > #
	I0911 11:27:38.650799  227744 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0911 11:27:38.650805  227744 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0911 11:27:38.650809  227744 command_runner.go:130] > #  runtime_type = "oci"
	I0911 11:27:38.650816  227744 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0911 11:27:38.650821  227744 command_runner.go:130] > #  privileged_without_host_devices = false
	I0911 11:27:38.650827  227744 command_runner.go:130] > #  allowed_annotations = []
	I0911 11:27:38.650831  227744 command_runner.go:130] > # Where:
	I0911 11:27:38.650840  227744 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0911 11:27:38.650851  227744 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0911 11:27:38.650859  227744 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0911 11:27:38.650868  227744 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0911 11:27:38.650872  227744 command_runner.go:130] > #   in $PATH.
	I0911 11:27:38.650878  227744 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0911 11:27:38.650885  227744 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0911 11:27:38.650891  227744 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0911 11:27:38.650897  227744 command_runner.go:130] > #   state.
	I0911 11:27:38.650904  227744 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0911 11:27:38.650912  227744 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0911 11:27:38.650920  227744 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0911 11:27:38.650928  227744 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0911 11:27:38.650934  227744 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0911 11:27:38.650942  227744 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0911 11:27:38.650949  227744 command_runner.go:130] > #   The currently recognized values are:
	I0911 11:27:38.650965  227744 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0911 11:27:38.650974  227744 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0911 11:27:38.650980  227744 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0911 11:27:38.650987  227744 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0911 11:27:38.650997  227744 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0911 11:27:38.651005  227744 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0911 11:27:38.651013  227744 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0911 11:27:38.651020  227744 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0911 11:27:38.651032  227744 command_runner.go:130] > #   should be moved to the container's cgroup
	I0911 11:27:38.651038  227744 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0911 11:27:38.651043  227744 command_runner.go:130] > runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	I0911 11:27:38.651049  227744 command_runner.go:130] > runtime_type = "oci"
	I0911 11:27:38.651054  227744 command_runner.go:130] > runtime_root = "/run/runc"
	I0911 11:27:38.651060  227744 command_runner.go:130] > runtime_config_path = ""
	I0911 11:27:38.651064  227744 command_runner.go:130] > monitor_path = ""
	I0911 11:27:38.651071  227744 command_runner.go:130] > monitor_cgroup = ""
	I0911 11:27:38.651075  227744 command_runner.go:130] > monitor_exec_cgroup = ""
	I0911 11:27:38.651122  227744 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0911 11:27:38.651132  227744 command_runner.go:130] > # running containers
	I0911 11:27:38.651142  227744 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0911 11:27:38.651155  227744 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0911 11:27:38.651169  227744 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0911 11:27:38.651179  227744 command_runner.go:130] > # surface and mitigating the consequences of a container breakout.
	I0911 11:27:38.651184  227744 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0911 11:27:38.651191  227744 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0911 11:27:38.651195  227744 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0911 11:27:38.651202  227744 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0911 11:27:38.651207  227744 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0911 11:27:38.651214  227744 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
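	Following the handler format described above, an additional runtime can be registered alongside the runc entry. A sketch for crun, assuming a hypothetical install path:

	    [crio.runtime.runtimes.crun]
	    runtime_path = "/usr/bin/crun"     # hypothetical location; else resolved via $PATH
	    runtime_type = "oci"
	    runtime_root = "/run/crun"
	    allowed_annotations = [
	        "io.kubernetes.cri-o.Devices", # let pods request extra devices
	    ]

	Pods would then select it through a Kubernetes RuntimeClass whose handler field is "crun".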
	I0911 11:27:38.651220  227744 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0911 11:27:38.651228  227744 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0911 11:27:38.651236  227744 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0911 11:27:38.651245  227744 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I0911 11:27:38.651255  227744 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0911 11:27:38.651263  227744 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0911 11:27:38.651271  227744 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0911 11:27:38.651282  227744 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0911 11:27:38.651288  227744 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0911 11:27:38.651297  227744 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0911 11:27:38.651303  227744 command_runner.go:130] > # Example:
	I0911 11:27:38.651307  227744 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0911 11:27:38.651314  227744 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0911 11:27:38.651319  227744 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0911 11:27:38.651326  227744 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0911 11:27:38.651330  227744 command_runner.go:130] > # cpuset = "0-1"
	I0911 11:27:38.651336  227744 command_runner.go:130] > # cpushares = 0
	I0911 11:27:38.651340  227744 command_runner.go:130] > # Where:
	I0911 11:27:38.651347  227744 command_runner.go:130] > # The workload name is workload-type.
	I0911 11:27:38.651356  227744 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0911 11:27:38.651364  227744 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0911 11:27:38.651372  227744 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0911 11:27:38.651380  227744 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0911 11:27:38.651388  227744 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
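	Putting the workload description together, a sketch with a hypothetical workload name and default values:

	    [crio.runtime.workloads.throttled]
	    activation_annotation = "io.crio/throttled"    # pods opt in with this annotation key
	    annotation_prefix = "io.crio.throttled"
	    [crio.runtime.workloads.throttled.resources]
	    cpuset = "0-1"       # default CPU set for opted-in containers
	    cpushares = "512"    # default CPU shares
	    # Per-container override, per the prefix rule above:
	    #   annotation io.crio.throttled.cpushares/<ctrName> = "256"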
	I0911 11:27:38.651394  227744 command_runner.go:130] > # 
	I0911 11:27:38.651400  227744 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0911 11:27:38.651405  227744 command_runner.go:130] > #
	I0911 11:27:38.651413  227744 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0911 11:27:38.651422  227744 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0911 11:27:38.651430  227744 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0911 11:27:38.651439  227744 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0911 11:27:38.651444  227744 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0911 11:27:38.651450  227744 command_runner.go:130] > [crio.image]
	I0911 11:27:38.651456  227744 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0911 11:27:38.651462  227744 command_runner.go:130] > # default_transport = "docker://"
	I0911 11:27:38.651468  227744 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0911 11:27:38.651477  227744 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0911 11:27:38.651481  227744 command_runner.go:130] > # global_auth_file = ""
	I0911 11:27:38.651486  227744 command_runner.go:130] > # The image used to instantiate infra containers.
	I0911 11:27:38.651495  227744 command_runner.go:130] > # This option supports live configuration reload.
	I0911 11:27:38.651502  227744 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0911 11:27:38.651509  227744 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0911 11:27:38.651516  227744 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0911 11:27:38.651525  227744 command_runner.go:130] > # This option supports live configuration reload.
	I0911 11:27:38.651531  227744 command_runner.go:130] > # pause_image_auth_file = ""
	I0911 11:27:38.651537  227744 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0911 11:27:38.651545  227744 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I0911 11:27:38.651552  227744 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I0911 11:27:38.651560  227744 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0911 11:27:38.651564  227744 command_runner.go:130] > # pause_command = "/pause"
	I0911 11:27:38.651576  227744 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0911 11:27:38.651589  227744 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0911 11:27:38.651601  227744 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0911 11:27:38.651615  227744 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0911 11:27:38.651629  227744 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0911 11:27:38.651639  227744 command_runner.go:130] > # signature_policy = ""
	I0911 11:27:38.651691  227744 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0911 11:27:38.651708  227744 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0911 11:27:38.651715  227744 command_runner.go:130] > # changing them here.
	I0911 11:27:38.651722  227744 command_runner.go:130] > # insecure_registries = [
	I0911 11:27:38.651731  227744 command_runner.go:130] > # ]
	I0911 11:27:38.651745  227744 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0911 11:27:38.651760  227744 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0911 11:27:38.651769  227744 command_runner.go:130] > # image_volumes = "mkdir"
	I0911 11:27:38.651776  227744 command_runner.go:130] > # Temporary directory to use for storing big files
	I0911 11:27:38.651783  227744 command_runner.go:130] > # big_files_temporary_dir = ""
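	A [crio.image] sketch touching the registry options above (the registry host is hypothetical, and registries.conf remains the preferred place for registry settings):

	    [crio.image]
	    default_transport = "docker://"
	    pause_image = "registry.k8s.io/pause:3.9"
	    insecure_registries = [
	        "registry.internal.example:5000",   # hypothetical in-cluster registry
	    ]
	    image_volumes = "mkdir"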
	I0911 11:27:38.651789  227744 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0911 11:27:38.651795  227744 command_runner.go:130] > # CNI plugins.
	I0911 11:27:38.651799  227744 command_runner.go:130] > [crio.network]
	I0911 11:27:38.651805  227744 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0911 11:27:38.651812  227744 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I0911 11:27:38.651817  227744 command_runner.go:130] > # cni_default_network = ""
	I0911 11:27:38.651825  227744 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0911 11:27:38.651832  227744 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0911 11:27:38.651845  227744 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0911 11:27:38.651853  227744 command_runner.go:130] > # plugin_dirs = [
	I0911 11:27:38.651860  227744 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0911 11:27:38.651869  227744 command_runner.go:130] > # ]
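	The network section rarely needs changing; a sketch pointing CRI-O at an extra, hypothetical plugin directory:

	    [crio.network]
	    network_dir = "/etc/cni/net.d/"
	    plugin_dirs = [
	        "/opt/cni/bin/",
	        "/usr/libexec/cni/",   # hypothetical additional plugin directory
	    ]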
	I0911 11:27:38.651879  227744 command_runner.go:130] > # A necessary configuration for Prometheus-based metrics retrieval
	I0911 11:27:38.651893  227744 command_runner.go:130] > [crio.metrics]
	I0911 11:27:38.651905  227744 command_runner.go:130] > # Globally enable or disable metrics support.
	I0911 11:27:38.651915  227744 command_runner.go:130] > # enable_metrics = false
	I0911 11:27:38.651924  227744 command_runner.go:130] > # Specify enabled metrics collectors.
	I0911 11:27:38.651929  227744 command_runner.go:130] > # By default, all metrics are enabled.
	I0911 11:27:38.651937  227744 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0911 11:27:38.651945  227744 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0911 11:27:38.651951  227744 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0911 11:27:38.651957  227744 command_runner.go:130] > # metrics_collectors = [
	I0911 11:27:38.651961  227744 command_runner.go:130] > # 	"operations",
	I0911 11:27:38.651968  227744 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0911 11:27:38.651972  227744 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0911 11:27:38.651979  227744 command_runner.go:130] > # 	"operations_errors",
	I0911 11:27:38.651983  227744 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0911 11:27:38.651989  227744 command_runner.go:130] > # 	"image_pulls_by_name",
	I0911 11:27:38.651994  227744 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0911 11:27:38.652000  227744 command_runner.go:130] > # 	"image_pulls_failures",
	I0911 11:27:38.652004  227744 command_runner.go:130] > # 	"image_pulls_successes",
	I0911 11:27:38.652009  227744 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0911 11:27:38.652016  227744 command_runner.go:130] > # 	"image_layer_reuse",
	I0911 11:27:38.652020  227744 command_runner.go:130] > # 	"containers_oom_total",
	I0911 11:27:38.652024  227744 command_runner.go:130] > # 	"containers_oom",
	I0911 11:27:38.652030  227744 command_runner.go:130] > # 	"processes_defunct",
	I0911 11:27:38.652034  227744 command_runner.go:130] > # 	"operations_total",
	I0911 11:27:38.652040  227744 command_runner.go:130] > # 	"operations_latency_seconds",
	I0911 11:27:38.652048  227744 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0911 11:27:38.652054  227744 command_runner.go:130] > # 	"operations_errors_total",
	I0911 11:27:38.652059  227744 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0911 11:27:38.652065  227744 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0911 11:27:38.652078  227744 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0911 11:27:38.652085  227744 command_runner.go:130] > # 	"image_pulls_success_total",
	I0911 11:27:38.652090  227744 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0911 11:27:38.652097  227744 command_runner.go:130] > # 	"containers_oom_count_total",
	I0911 11:27:38.652100  227744 command_runner.go:130] > # ]
	I0911 11:27:38.652105  227744 command_runner.go:130] > # The port on which the metrics server will listen.
	I0911 11:27:38.652112  227744 command_runner.go:130] > # metrics_port = 9090
	I0911 11:27:38.652118  227744 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0911 11:27:38.652124  227744 command_runner.go:130] > # metrics_socket = ""
	I0911 11:27:38.652129  227744 command_runner.go:130] > # The certificate for the secure metrics server.
	I0911 11:27:38.652137  227744 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0911 11:27:38.652146  227744 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0911 11:27:38.652152  227744 command_runner.go:130] > # certificate on any modification event.
	I0911 11:27:38.652158  227744 command_runner.go:130] > # metrics_cert = ""
	I0911 11:27:38.652164  227744 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0911 11:27:38.652171  227744 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0911 11:27:38.652177  227744 command_runner.go:130] > # metrics_key = ""
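	A metrics sketch enabling the server with a reduced collector set (names taken from the list above):

	    [crio.metrics]
	    enable_metrics = true
	    metrics_port = 9090
	    metrics_collectors = [
	        "operations",
	        "image_pulls_failures",
	        "containers_oom_total",
	    ]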
	I0911 11:27:38.652183  227744 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0911 11:27:38.652189  227744 command_runner.go:130] > [crio.tracing]
	I0911 11:27:38.652194  227744 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0911 11:27:38.652201  227744 command_runner.go:130] > # enable_tracing = false
	I0911 11:27:38.652206  227744 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0911 11:27:38.652213  227744 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0911 11:27:38.652218  227744 command_runner.go:130] > # Number of samples to collect per million spans.
	I0911 11:27:38.652224  227744 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0911 11:27:38.652230  227744 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0911 11:27:38.652234  227744 command_runner.go:130] > [crio.stats]
	I0911 11:27:38.652242  227744 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0911 11:27:38.652250  227744 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0911 11:27:38.652254  227744 command_runner.go:130] > # stats_collection_period = 0
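	A combined tracing and stats sketch, assuming a hypothetical OTLP collector on localhost:

	    [crio.tracing]
	    enable_tracing = true
	    tracing_endpoint = "127.0.0.1:4317"          # hypothetical collector address
	    tracing_sampling_rate_per_million = 100000   # sample roughly 10% of spans

	    [crio.stats]
	    stats_collection_period = 10                 # seconds; 0 collects on demand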
	I0911 11:27:38.652284  227744 command_runner.go:130] ! time="2023-09-11 11:27:38.645719420Z" level=info msg="Starting CRI-O, version: 1.24.6, git: 4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90(clean)"
	I0911 11:27:38.652297  227744 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0911 11:27:38.652368  227744 cni.go:84] Creating CNI manager for ""
	I0911 11:27:38.652376  227744 cni.go:136] 1 nodes found, recommending kindnet
	I0911 11:27:38.652392  227744 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0911 11:27:38.652412  227744 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-517978 NodeName:multinode-517978 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0911 11:27:38.652550  227744 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-517978"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0911 11:27:38.652614  227744 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=multinode-517978 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:multinode-517978 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0911 11:27:38.652671  227744 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0911 11:27:38.660430  227744 command_runner.go:130] > kubeadm
	I0911 11:27:38.660447  227744 command_runner.go:130] > kubectl
	I0911 11:27:38.660451  227744 command_runner.go:130] > kubelet
	I0911 11:27:38.661058  227744 binaries.go:44] Found k8s binaries, skipping transfer
	I0911 11:27:38.661137  227744 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0911 11:27:38.669022  227744 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (426 bytes)
	I0911 11:27:38.684556  227744 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0911 11:27:38.700472  227744 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2097 bytes)
	I0911 11:27:38.716011  227744 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0911 11:27:38.719280  227744 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0911 11:27:38.728885  227744 certs.go:56] Setting up /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/multinode-517978 for IP: 192.168.58.2
	I0911 11:27:38.728925  227744 certs.go:190] acquiring lock for shared ca certs: {Name:mk582952512be9164e5fce9dc802f18bafa97346 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 11:27:38.729065  227744 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17223-136166/.minikube/ca.key
	I0911 11:27:38.729104  227744 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17223-136166/.minikube/proxy-client-ca.key
	I0911 11:27:38.729144  227744 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/multinode-517978/client.key
	I0911 11:27:38.729157  227744 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/multinode-517978/client.crt with IP's: []
	I0911 11:27:38.869773  227744 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/multinode-517978/client.crt ...
	I0911 11:27:38.869805  227744 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/multinode-517978/client.crt: {Name:mk5527cf4729a684c32fee98b7cc0454700519d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 11:27:38.869974  227744 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/multinode-517978/client.key ...
	I0911 11:27:38.869984  227744 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/multinode-517978/client.key: {Name:mkd3f0e3882b81abc0a6a66519321bf630c2872d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 11:27:38.870052  227744 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/multinode-517978/apiserver.key.cee25041
	I0911 11:27:38.870066  227744 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/multinode-517978/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0911 11:27:38.983585  227744 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/multinode-517978/apiserver.crt.cee25041 ...
	I0911 11:27:38.983618  227744 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/multinode-517978/apiserver.crt.cee25041: {Name:mkf9d5ed6a053dacfeb4441e895af3ce36764dc1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 11:27:38.983791  227744 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/multinode-517978/apiserver.key.cee25041 ...
	I0911 11:27:38.983803  227744 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/multinode-517978/apiserver.key.cee25041: {Name:mkbb537aeb59d8ec09d471297d75d84c864ce25b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 11:27:38.983871  227744 certs.go:337] copying /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/multinode-517978/apiserver.crt.cee25041 -> /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/multinode-517978/apiserver.crt
	I0911 11:27:38.983941  227744 certs.go:341] copying /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/multinode-517978/apiserver.key.cee25041 -> /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/multinode-517978/apiserver.key
	I0911 11:27:38.983989  227744 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/multinode-517978/proxy-client.key
	I0911 11:27:38.984003  227744 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/multinode-517978/proxy-client.crt with IP's: []
	I0911 11:27:39.321282  227744 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/multinode-517978/proxy-client.crt ...
	I0911 11:27:39.321318  227744 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/multinode-517978/proxy-client.crt: {Name:mka1759294dae13deed10d8a43e3595e790fa452 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 11:27:39.321504  227744 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/multinode-517978/proxy-client.key ...
	I0911 11:27:39.321517  227744 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/multinode-517978/proxy-client.key: {Name:mkb99b514955b55bceaaf407968160af30449e85 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 11:27:39.321590  227744 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/multinode-517978/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0911 11:27:39.321611  227744 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/multinode-517978/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0911 11:27:39.321624  227744 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/multinode-517978/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0911 11:27:39.321642  227744 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/multinode-517978/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0911 11:27:39.321656  227744 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-136166/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0911 11:27:39.321671  227744 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-136166/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0911 11:27:39.321686  227744 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-136166/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0911 11:27:39.321701  227744 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-136166/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0911 11:27:39.321759  227744 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-136166/.minikube/certs/home/jenkins/minikube-integration/17223-136166/.minikube/certs/143417.pem (1338 bytes)
	W0911 11:27:39.321797  227744 certs.go:433] ignoring /home/jenkins/minikube-integration/17223-136166/.minikube/certs/home/jenkins/minikube-integration/17223-136166/.minikube/certs/143417_empty.pem, impossibly tiny 0 bytes
	I0911 11:27:39.321810  227744 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-136166/.minikube/certs/home/jenkins/minikube-integration/17223-136166/.minikube/certs/ca-key.pem (1679 bytes)
	I0911 11:27:39.321840  227744 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-136166/.minikube/certs/home/jenkins/minikube-integration/17223-136166/.minikube/certs/ca.pem (1082 bytes)
	I0911 11:27:39.321867  227744 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-136166/.minikube/certs/home/jenkins/minikube-integration/17223-136166/.minikube/certs/cert.pem (1123 bytes)
	I0911 11:27:39.321901  227744 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-136166/.minikube/certs/home/jenkins/minikube-integration/17223-136166/.minikube/certs/key.pem (1679 bytes)
	I0911 11:27:39.321944  227744 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-136166/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17223-136166/.minikube/files/etc/ssl/certs/1434172.pem (1708 bytes)
	I0911 11:27:39.321978  227744 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-136166/.minikube/files/etc/ssl/certs/1434172.pem -> /usr/share/ca-certificates/1434172.pem
	I0911 11:27:39.321995  227744 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-136166/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0911 11:27:39.322013  227744 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-136166/.minikube/certs/143417.pem -> /usr/share/ca-certificates/143417.pem
	I0911 11:27:39.322567  227744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/multinode-517978/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0911 11:27:39.344361  227744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/multinode-517978/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0911 11:27:39.365735  227744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/multinode-517978/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0911 11:27:39.386199  227744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/multinode-517978/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0911 11:27:39.406424  227744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0911 11:27:39.426635  227744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0911 11:27:39.446620  227744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0911 11:27:39.467448  227744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0911 11:27:39.488664  227744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/files/etc/ssl/certs/1434172.pem --> /usr/share/ca-certificates/1434172.pem (1708 bytes)
	I0911 11:27:39.509322  227744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0911 11:27:39.529688  227744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/certs/143417.pem --> /usr/share/ca-certificates/143417.pem (1338 bytes)
	I0911 11:27:39.550217  227744 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
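The `scp` lines above stage each certificate at a fixed path under /var/lib/minikube/certs, and the final `scp memory --> /var/lib/minikube/kubeconfig` line copies an in-memory buffer rather than a file on disk. A minimal sketch of that in-memory push, assuming a reachable SSH host with passwordless sudo and an `ssh` binary on the local PATH (an illustration only, not minikube's actual ssh_runner transport):

    package main

    import (
    	"bytes"
    	"fmt"
    	"os/exec"
    )

    // pushBytes lands an in-memory payload at a root-owned remote path,
    // roughly what "scp memory --> /var/lib/minikube/kubeconfig" describes.
    func pushBytes(host, remotePath string, data []byte) error {
    	// `sudo tee` lets the payload land in a root-owned directory.
    	cmd := exec.Command("ssh", host, fmt.Sprintf("sudo tee %s >/dev/null", remotePath))
    	cmd.Stdin = bytes.NewReader(data)
    	if out, err := cmd.CombinedOutput(); err != nil {
    		return fmt.Errorf("push %s: %v: %s", remotePath, err, out)
    	}
    	return nil
    }

    func main() {
    	err := pushBytes("docker@127.0.0.1", "/var/lib/minikube/kubeconfig", []byte("apiVersion: v1\n"))
    	fmt.Println(err)
    }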
	I0911 11:27:39.565401  227744 ssh_runner.go:195] Run: openssl version
	I0911 11:27:39.569901  227744 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I0911 11:27:39.570032  227744 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0911 11:27:39.578291  227744 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0911 11:27:39.581295  227744 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep 11 11:09 /usr/share/ca-certificates/minikubeCA.pem
	I0911 11:27:39.581335  227744 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 11 11:09 /usr/share/ca-certificates/minikubeCA.pem
	I0911 11:27:39.581373  227744 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0911 11:27:39.587215  227744 command_runner.go:130] > b5213941
	I0911 11:27:39.587444  227744 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0911 11:27:39.595661  227744 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/143417.pem && ln -fs /usr/share/ca-certificates/143417.pem /etc/ssl/certs/143417.pem"
	I0911 11:27:39.603611  227744 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/143417.pem
	I0911 11:27:39.606514  227744 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep 11 11:15 /usr/share/ca-certificates/143417.pem
	I0911 11:27:39.606569  227744 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 11 11:15 /usr/share/ca-certificates/143417.pem
	I0911 11:27:39.606610  227744 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/143417.pem
	I0911 11:27:39.612471  227744 command_runner.go:130] > 51391683
	I0911 11:27:39.612639  227744 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/143417.pem /etc/ssl/certs/51391683.0"
	I0911 11:27:39.620590  227744 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1434172.pem && ln -fs /usr/share/ca-certificates/1434172.pem /etc/ssl/certs/1434172.pem"
	I0911 11:27:39.628759  227744 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1434172.pem
	I0911 11:27:39.632033  227744 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep 11 11:15 /usr/share/ca-certificates/1434172.pem
	I0911 11:27:39.632059  227744 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 11 11:15 /usr/share/ca-certificates/1434172.pem
	I0911 11:27:39.632101  227744 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1434172.pem
	I0911 11:27:39.638022  227744 command_runner.go:130] > 3ec20f2e
	I0911 11:27:39.638256  227744 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1434172.pem /etc/ssl/certs/3ec20f2e.0"
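The three `openssl x509 -hash -noout` / `ln -fs` pairs above implement the c_rehash convention: OpenSSL finds trusted CAs in /etc/ssl/certs through symlinks named `<subject-hash>.0` (b5213941.0, 51391683.0, and 3ec20f2e.0 here). A local sketch of one hash-and-link step, assuming an `openssl` binary on PATH and write access to the target directory:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // rehash links pemPath into certDir under its OpenSSL subject hash,
    // e.g. /etc/ssl/certs/b5213941.0, mirroring the log's ln -fs commands.
    func rehash(pemPath, certDir string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		return fmt.Errorf("hash %s: %w", pemPath, err)
    	}
    	link := filepath.Join(certDir, strings.TrimSpace(string(out))+".0")
    	os.Remove(link) // emulate ln -f by replacing any stale link
    	return os.Symlink(pemPath, link)
    }

    func main() {
    	fmt.Println(rehash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"))
    }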
	I0911 11:27:39.646730  227744 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0911 11:27:39.649684  227744 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0911 11:27:39.649721  227744 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
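Note how the failed `ls /var/lib/minikube/certs/etcd` is read as a signal rather than an error: exit status 2 means the directory has never been created, hence "likely first start". That probe pattern, sketched in Go (GNU ls semantics assumed; not minikube's exact code):

    package main

    import (
    	"errors"
    	"fmt"
    	"os/exec"
    )

    // firstStart interprets a non-zero `ls` exit as "never initialized"
    // instead of failing, the way the certs-directory check above does.
    func firstStart(dir string) (bool, error) {
    	err := exec.Command("ls", dir).Run()
    	var ee *exec.ExitError
    	if errors.As(err, &ee) {
    		return true, nil // GNU ls exits 2 when the path does not exist
    	}
    	return false, err // nil: dir exists; non-exit errors bubble up
    }

    func main() {
    	fmt.Println(firstStart("/var/lib/minikube/certs/etcd"))
    }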
	I0911 11:27:39.649764  227744 kubeadm.go:404] StartCluster: {Name:multinode-517978 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:multinode-517978 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 11:27:39.649854  227744 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0911 11:27:39.649904  227744 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0911 11:27:39.682447  227744 cri.go:89] found id: ""
	I0911 11:27:39.682509  227744 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0911 11:27:39.690548  227744 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0911 11:27:39.690576  227744 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0911 11:27:39.690582  227744 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0911 11:27:39.690664  227744 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0911 11:27:39.698458  227744 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0911 11:27:39.698509  227744 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0911 11:27:39.706543  227744 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0911 11:27:39.706562  227744 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0911 11:27:39.706569  227744 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0911 11:27:39.706577  227744 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0911 11:27:39.706613  227744 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0911 11:27:39.706649  227744 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
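The `kubeadm init` command above is a fixed config path plus a comma-joined `--ignore-preflight-errors` list; the SystemVerification entry was added because of the docker driver, as logged a few lines earlier. A hypothetical reconstruction of that string assembly (the ignore list is abridged):

    package main

    import (
    	"fmt"
    	"strings"
    )

    func main() {
    	ignore := []string{
    		"DirAvailable--etc-kubernetes-manifests",
    		"Port-10250", "Swap", "NumCPU", "Mem",
    		"SystemVerification", // docker driver shares the host kernel
    	}
    	cmd := fmt.Sprintf(`sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" `+
    		`kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=%s`,
    		strings.Join(ignore, ","))
    	fmt.Println(cmd)
    }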
	I0911 11:27:39.750918  227744 kubeadm.go:322] [init] Using Kubernetes version: v1.28.1
	I0911 11:27:39.750946  227744 command_runner.go:130] > [init] Using Kubernetes version: v1.28.1
	I0911 11:27:39.750998  227744 kubeadm.go:322] [preflight] Running pre-flight checks
	I0911 11:27:39.751005  227744 command_runner.go:130] > [preflight] Running pre-flight checks
	I0911 11:27:39.785007  227744 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0911 11:27:39.785039  227744 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I0911 11:27:39.785098  227744 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1041-gcp
	I0911 11:27:39.785109  227744 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1041-gcp
	I0911 11:27:39.785179  227744 kubeadm.go:322] OS: Linux
	I0911 11:27:39.785202  227744 command_runner.go:130] > OS: Linux
	I0911 11:27:39.785273  227744 kubeadm.go:322] CGROUPS_CPU: enabled
	I0911 11:27:39.785283  227744 command_runner.go:130] > CGROUPS_CPU: enabled
	I0911 11:27:39.785351  227744 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0911 11:27:39.785360  227744 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I0911 11:27:39.785441  227744 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0911 11:27:39.785451  227744 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I0911 11:27:39.785512  227744 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0911 11:27:39.785522  227744 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I0911 11:27:39.785601  227744 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0911 11:27:39.785610  227744 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I0911 11:27:39.785676  227744 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0911 11:27:39.785695  227744 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I0911 11:27:39.785758  227744 kubeadm.go:322] CGROUPS_PIDS: enabled
	I0911 11:27:39.785767  227744 command_runner.go:130] > CGROUPS_PIDS: enabled
	I0911 11:27:39.785831  227744 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I0911 11:27:39.785841  227744 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I0911 11:27:39.785910  227744 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I0911 11:27:39.785921  227744 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I0911 11:27:39.847135  227744 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0911 11:27:39.847176  227744 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0911 11:27:39.847327  227744 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0911 11:27:39.847340  227744 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0911 11:27:39.847415  227744 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0911 11:27:39.847422  227744 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0911 11:27:40.038314  227744 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0911 11:27:40.042305  227744 out.go:204]   - Generating certificates and keys ...
	I0911 11:27:40.038355  227744 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0911 11:27:40.042500  227744 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0911 11:27:40.042522  227744 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0911 11:27:40.042600  227744 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0911 11:27:40.042612  227744 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0911 11:27:40.219087  227744 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0911 11:27:40.219113  227744 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0911 11:27:40.318409  227744 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0911 11:27:40.318438  227744 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0911 11:27:40.643075  227744 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0911 11:27:40.643088  227744 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0911 11:27:40.845075  227744 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0911 11:27:40.845107  227744 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0911 11:27:41.022650  227744 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0911 11:27:41.022697  227744 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0911 11:27:41.022844  227744 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-517978] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0911 11:27:41.022859  227744 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-517978] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0911 11:27:41.210303  227744 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0911 11:27:41.210330  227744 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0911 11:27:41.210492  227744 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-517978] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0911 11:27:41.210504  227744 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-517978] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0911 11:27:41.371624  227744 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0911 11:27:41.371650  227744 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0911 11:27:41.508566  227744 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0911 11:27:41.508596  227744 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0911 11:27:41.596588  227744 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0911 11:27:41.596615  227744 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0911 11:27:41.596695  227744 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0911 11:27:41.596707  227744 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0911 11:27:41.670365  227744 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0911 11:27:41.670411  227744 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0911 11:27:41.753496  227744 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0911 11:27:41.753527  227744 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0911 11:27:42.155312  227744 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0911 11:27:42.155342  227744 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0911 11:27:42.279264  227744 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0911 11:27:42.279293  227744 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0911 11:27:42.279731  227744 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0911 11:27:42.279752  227744 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0911 11:27:42.281973  227744 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0911 11:27:42.284095  227744 out.go:204]   - Booting up control plane ...
	I0911 11:27:42.281984  227744 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0911 11:27:42.284194  227744 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0911 11:27:42.284215  227744 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0911 11:27:42.284370  227744 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0911 11:27:42.284384  227744 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0911 11:27:42.284495  227744 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0911 11:27:42.284519  227744 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0911 11:27:42.291837  227744 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0911 11:27:42.291852  227744 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0911 11:27:42.292581  227744 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0911 11:27:42.292596  227744 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0911 11:27:42.292625  227744 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0911 11:27:42.292633  227744 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0911 11:27:42.368679  227744 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0911 11:27:42.368733  227744 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0911 11:27:47.370183  227744 kubeadm.go:322] [apiclient] All control plane components are healthy after 5.001652 seconds
	I0911 11:27:47.370210  227744 command_runner.go:130] > [apiclient] All control plane components are healthy after 5.001652 seconds
	I0911 11:27:47.370369  227744 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0911 11:27:47.370398  227744 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0911 11:27:47.382771  227744 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0911 11:27:47.382796  227744 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0911 11:27:47.903855  227744 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0911 11:27:47.903886  227744 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0911 11:27:47.904085  227744 kubeadm.go:322] [mark-control-plane] Marking the node multinode-517978 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0911 11:27:47.904108  227744 command_runner.go:130] > [mark-control-plane] Marking the node multinode-517978 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0911 11:27:48.414107  227744 kubeadm.go:322] [bootstrap-token] Using token: ogf1u5.5vl8larsjdww74yn
	I0911 11:27:48.415863  227744 out.go:204]   - Configuring RBAC rules ...
	I0911 11:27:48.414149  227744 command_runner.go:130] > [bootstrap-token] Using token: ogf1u5.5vl8larsjdww74yn
	I0911 11:27:48.416004  227744 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0911 11:27:48.416022  227744 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0911 11:27:48.419949  227744 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0911 11:27:48.419974  227744 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0911 11:27:48.426309  227744 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0911 11:27:48.426312  227744 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0911 11:27:48.428976  227744 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0911 11:27:48.429000  227744 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0911 11:27:48.433088  227744 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0911 11:27:48.433109  227744 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0911 11:27:48.436033  227744 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0911 11:27:48.436054  227744 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0911 11:27:48.446456  227744 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0911 11:27:48.446481  227744 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0911 11:27:48.649071  227744 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0911 11:27:48.649093  227744 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0911 11:27:48.862406  227744 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0911 11:27:48.862456  227744 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0911 11:27:48.863217  227744 kubeadm.go:322] 
	I0911 11:27:48.863305  227744 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0911 11:27:48.863318  227744 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0911 11:27:48.863325  227744 kubeadm.go:322] 
	I0911 11:27:48.863404  227744 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0911 11:27:48.863411  227744 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0911 11:27:48.863414  227744 kubeadm.go:322] 
	I0911 11:27:48.863436  227744 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0911 11:27:48.863442  227744 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0911 11:27:48.863489  227744 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0911 11:27:48.863495  227744 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0911 11:27:48.863539  227744 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0911 11:27:48.863545  227744 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0911 11:27:48.863548  227744 kubeadm.go:322] 
	I0911 11:27:48.863596  227744 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0911 11:27:48.863603  227744 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0911 11:27:48.863607  227744 kubeadm.go:322] 
	I0911 11:27:48.863648  227744 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0911 11:27:48.863654  227744 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0911 11:27:48.863657  227744 kubeadm.go:322] 
	I0911 11:27:48.863698  227744 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0911 11:27:48.863704  227744 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0911 11:27:48.863778  227744 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0911 11:27:48.863791  227744 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0911 11:27:48.863860  227744 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0911 11:27:48.863871  227744 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0911 11:27:48.863880  227744 kubeadm.go:322] 
	I0911 11:27:48.863988  227744 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0911 11:27:48.863998  227744 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0911 11:27:48.864124  227744 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0911 11:27:48.864152  227744 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0911 11:27:48.864177  227744 kubeadm.go:322] 
	I0911 11:27:48.864295  227744 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token ogf1u5.5vl8larsjdww74yn \
	I0911 11:27:48.864305  227744 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token ogf1u5.5vl8larsjdww74yn \
	I0911 11:27:48.864435  227744 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:29dd5c485ccb33701674fbb0d9adfdc876e5b65e82ced9970c93ac8717b7a347 \
	I0911 11:27:48.864447  227744 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:29dd5c485ccb33701674fbb0d9adfdc876e5b65e82ced9970c93ac8717b7a347 \
	I0911 11:27:48.864475  227744 kubeadm.go:322] 	--control-plane 
	I0911 11:27:48.864485  227744 command_runner.go:130] > 	--control-plane 
	I0911 11:27:48.864491  227744 kubeadm.go:322] 
	I0911 11:27:48.864625  227744 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0911 11:27:48.864649  227744 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0911 11:27:48.864663  227744 kubeadm.go:322] 
	I0911 11:27:48.864786  227744 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token ogf1u5.5vl8larsjdww74yn \
	I0911 11:27:48.864802  227744 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token ogf1u5.5vl8larsjdww74yn \
	I0911 11:27:48.864939  227744 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:29dd5c485ccb33701674fbb0d9adfdc876e5b65e82ced9970c93ac8717b7a347 
	I0911 11:27:48.864948  227744 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:29dd5c485ccb33701674fbb0d9adfdc876e5b65e82ced9970c93ac8717b7a347 
	I0911 11:27:48.866985  227744 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1041-gcp\n", err: exit status 1
	I0911 11:27:48.867011  227744 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1041-gcp\n", err: exit status 1
	I0911 11:27:48.867144  227744 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0911 11:27:48.867163  227744 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0911 11:27:48.867186  227744 cni.go:84] Creating CNI manager for ""
	I0911 11:27:48.867200  227744 cni.go:136] 1 nodes found, recommending kindnet
	I0911 11:27:48.869566  227744 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0911 11:27:48.871198  227744 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0911 11:27:48.875112  227744 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0911 11:27:48.875155  227744 command_runner.go:130] >   Size: 3955775   	Blocks: 7728       IO Block: 4096   regular file
	I0911 11:27:48.875165  227744 command_runner.go:130] > Device: 34h/52d	Inode: 4171654     Links: 1
	I0911 11:27:48.875174  227744 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0911 11:27:48.875181  227744 command_runner.go:130] > Access: 2023-05-09 19:53:47.000000000 +0000
	I0911 11:27:48.875186  227744 command_runner.go:130] > Modify: 2023-05-09 19:53:47.000000000 +0000
	I0911 11:27:48.875193  227744 command_runner.go:130] > Change: 2023-09-11 11:09:31.132301758 +0000
	I0911 11:27:48.875200  227744 command_runner.go:130] >  Birth: 2023-09-11 11:09:31.108299847 +0000
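Before applying a CNI manifest, the node is probed with `stat /opt/cni/bin/portmap`: a regular, executable portmap binary implies the reference CNI plugins are installed. Locally the same check is a few lines (a sketch run on the node rather than over SSH):

    package main

    import (
    	"fmt"
    	"os"
    )

    func main() {
    	// Presence of a regular /opt/cni/bin/portmap is taken as evidence the
    	// standard CNI plugin binaries exist before kindnet is applied.
    	fi, err := os.Stat("/opt/cni/bin/portmap")
    	if err == nil && fi.Mode().IsRegular() {
    		fmt.Printf("portmap present: %d bytes, mode %v\n", fi.Size(), fi.Mode().Perm())
    	} else {
    		fmt.Println("portmap missing:", err)
    	}
    }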
	I0911 11:27:48.875251  227744 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.1/kubectl ...
	I0911 11:27:48.875262  227744 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0911 11:27:48.891217  227744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0911 11:27:49.525024  227744 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0911 11:27:49.529853  227744 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0911 11:27:49.536384  227744 command_runner.go:130] > serviceaccount/kindnet created
	I0911 11:27:49.546014  227744 command_runner.go:130] > daemonset.apps/kindnet created
	I0911 11:27:49.550211  227744 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0911 11:27:49.550296  227744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 11:27:49.550298  227744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=58460de6978298fe1c37b30354468f3a287d03e9 minikube.k8s.io/name=multinode-517978 minikube.k8s.io/updated_at=2023_09_11T11_27_49_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 11:27:49.557785  227744 command_runner.go:130] > -16
	I0911 11:27:49.557826  227744 ops.go:34] apiserver oom_adj: -16
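The `-16` echoed back from `cat /proc/$(pgrep kube-apiserver)/oom_adj` confirms the kernel's OOM killer will strongly prefer evicting other processes before the apiserver. Reading it directly, as a Linux-only sketch (assumes `pgrep` is installed and the oldest match is the right process):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strconv"
    	"strings"
    )

    // apiserverOOMAdj reads the apiserver's legacy OOM score; negative
    // values make the OOM killer far less likely to choose it.
    func apiserverOOMAdj() (int, error) {
    	pid, err := exec.Command("pgrep", "-o", "kube-apiserver").Output()
    	if err != nil {
    		return 0, err
    	}
    	raw, err := os.ReadFile(fmt.Sprintf("/proc/%s/oom_adj", strings.TrimSpace(string(pid))))
    	if err != nil {
    		return 0, err
    	}
    	return strconv.Atoi(strings.TrimSpace(string(raw)))
    }

    func main() {
    	fmt.Println(apiserverOOMAdj())
    }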
	I0911 11:27:49.666610  227744 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0911 11:27:49.666703  227744 command_runner.go:130] > node/multinode-517978 labeled
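The `kubectl label nodes ... --all --overwrite` call above stamps minikube's version, commit, profile name, and primary-node marker onto the node. With client-go the same labels could be applied through a strategic-merge patch; a sketch (not minikube's method, which shells out to the staged kubectl) assuming a clientset built from the kubeconfig in this log:

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/types"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/17223-136166/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	// Merge the primary-node marker into the node's labels, like
    	// `kubectl label nodes minikube.k8s.io/primary=true --overwrite`.
    	patch := []byte(`{"metadata":{"labels":{"minikube.k8s.io/primary":"true"}}}`)
    	_, err = cs.CoreV1().Nodes().Patch(context.TODO(), "multinode-517978",
    		types.StrategicMergePatchType, patch, metav1.PatchOptions{})
    	fmt.Println(err)
    }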
	I0911 11:27:49.666746  227744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 11:27:49.728757  227744 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0911 11:27:49.731328  227744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 11:27:49.800047  227744 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0911 11:27:50.300704  227744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 11:27:50.365973  227744 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0911 11:27:50.800506  227744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 11:27:50.866018  227744 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0911 11:27:51.300220  227744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 11:27:51.365156  227744 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0911 11:27:51.800922  227744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 11:27:51.864551  227744 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0911 11:27:52.300981  227744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 11:27:52.365130  227744 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0911 11:27:52.800601  227744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 11:27:52.861057  227744 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0911 11:27:53.301170  227744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 11:27:53.363333  227744 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0911 11:27:53.800527  227744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 11:27:53.863430  227744 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0911 11:27:54.301034  227744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 11:27:54.364055  227744 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0911 11:27:54.800585  227744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 11:27:54.861570  227744 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0911 11:27:55.301012  227744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 11:27:55.362709  227744 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0911 11:27:55.800790  227744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 11:27:55.862914  227744 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0911 11:27:56.300475  227744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 11:27:56.365572  227744 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0911 11:27:56.800262  227744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 11:27:56.863401  227744 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0911 11:27:57.301035  227744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 11:27:57.364215  227744 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0911 11:27:57.800631  227744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 11:27:57.861812  227744 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0911 11:27:58.300437  227744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 11:27:58.360916  227744 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0911 11:27:58.800298  227744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 11:27:58.863476  227744 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0911 11:27:59.301104  227744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 11:27:59.363059  227744 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0911 11:27:59.801276  227744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 11:27:59.866190  227744 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0911 11:28:00.300568  227744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 11:28:00.359806  227744 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0911 11:28:00.800835  227744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 11:28:00.863901  227744 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0911 11:28:01.300470  227744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 11:28:01.362866  227744 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0911 11:28:01.800231  227744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 11:28:01.871578  227744 command_runner.go:130] > NAME      SECRETS   AGE
	I0911 11:28:01.871605  227744 command_runner.go:130] > default   0         0s
	I0911 11:28:01.874167  227744 kubeadm.go:1081] duration metric: took 12.323941358s to wait for elevateKubeSystemPrivileges.
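The burst of `kubectl get sa default` / `serviceaccounts "default" not found` pairs above is a roughly 500ms poll waiting for the token controller to create the default ServiceAccount; here it resolved after about 12.3s. The same wait expressed against client-go, sketched with a plain loop (kubeconfig path taken from this log):

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/17223-136166/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	// Poll until the "default" ServiceAccount appears, as the log's loop does.
    	deadline := time.Now().Add(2 * time.Minute)
    	for time.Now().Before(deadline) {
    		if _, err := cs.CoreV1().ServiceAccounts("default").Get(context.TODO(), "default", metav1.GetOptions{}); err == nil {
    			fmt.Println("default service account ready")
    			return
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("timed out waiting for default service account")
    }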
	I0911 11:28:01.874207  227744 kubeadm.go:406] StartCluster complete in 22.224446179s
	I0911 11:28:01.874232  227744 settings.go:142] acquiring lock: {Name:mk01327a907b1ed5b7990abeca4c89109d2bed5d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 11:28:01.874321  227744 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17223-136166/kubeconfig
	I0911 11:28:01.875371  227744 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17223-136166/kubeconfig: {Name:mk3da3a5a3a5d35dd9d56a597907266732eec114 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 11:28:01.875632  227744 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0911 11:28:01.875763  227744 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0911 11:28:01.875841  227744 addons.go:69] Setting storage-provisioner=true in profile "multinode-517978"
	I0911 11:28:01.875865  227744 addons.go:69] Setting default-storageclass=true in profile "multinode-517978"
	I0911 11:28:01.875889  227744 config.go:182] Loaded profile config "multinode-517978": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0911 11:28:01.875901  227744 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-517978"
	I0911 11:28:01.875868  227744 addons.go:231] Setting addon storage-provisioner=true in "multinode-517978"
	I0911 11:28:01.875994  227744 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17223-136166/kubeconfig
	I0911 11:28:01.876025  227744 host.go:66] Checking if "multinode-517978" exists ...
	I0911 11:28:01.876325  227744 cli_runner.go:164] Run: docker container inspect multinode-517978 --format={{.State.Status}}
	I0911 11:28:01.876366  227744 kapi.go:59] client config for multinode-517978: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17223-136166/.minikube/profiles/multinode-517978/client.crt", KeyFile:"/home/jenkins/minikube-integration/17223-136166/.minikube/profiles/multinode-517978/client.key", CAFile:"/home/jenkins/minikube-integration/17223-136166/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d6c140), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0911 11:28:01.876477  227744 cli_runner.go:164] Run: docker container inspect multinode-517978 --format={{.State.Status}}
	I0911 11:28:01.877340  227744 cert_rotation.go:137] Starting client certificate rotation controller
	I0911 11:28:01.877595  227744 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0911 11:28:01.877615  227744 round_trippers.go:469] Request Headers:
	I0911 11:28:01.877626  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:28:01.877640  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:28:01.888813  227744 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0911 11:28:01.888839  227744 round_trippers.go:577] Response Headers:
	I0911 11:28:01.888851  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:28:01.888861  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:28:01.888871  227744 round_trippers.go:580]     Content-Length: 291
	I0911 11:28:01.888880  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:28:01 GMT
	I0911 11:28:01.888889  227744 round_trippers.go:580]     Audit-Id: e6d725f5-bb3a-457a-b2f1-b53c87126828
	I0911 11:28:01.888899  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:28:01.888908  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:28:01.888942  227744 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"27517565-a45b-4d59-9ce6-25ae123bbba6","resourceVersion":"344","creationTimestamp":"2023-09-11T11:27:48Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0911 11:28:01.889421  227744 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"27517565-a45b-4d59-9ce6-25ae123bbba6","resourceVersion":"344","creationTimestamp":"2023-09-11T11:27:48Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0911 11:28:01.889471  227744 round_trippers.go:463] PUT https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0911 11:28:01.889478  227744 round_trippers.go:469] Request Headers:
	I0911 11:28:01.889489  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:28:01.889501  227744 round_trippers.go:473]     Content-Type: application/json
	I0911 11:28:01.889509  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:28:01.899824  227744 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0911 11:28:01.899857  227744 round_trippers.go:577] Response Headers:
	I0911 11:28:01.899868  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:28:01.899877  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:28:01.899886  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:28:01.899895  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:28:01.899904  227744 round_trippers.go:580]     Content-Length: 291
	I0911 11:28:01.899913  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:28:01 GMT
	I0911 11:28:01.899922  227744 round_trippers.go:580]     Audit-Id: 0bbdd426-ffb2-44df-acdb-039533417ba9
	I0911 11:28:01.899957  227744 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"27517565-a45b-4d59-9ce6-25ae123bbba6","resourceVersion":"348","creationTimestamp":"2023-09-11T11:27:48Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0911 11:28:01.900132  227744 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0911 11:28:01.900155  227744 round_trippers.go:469] Request Headers:
	I0911 11:28:01.900171  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:28:01.900185  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:28:01.905998  227744 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0911 11:28:01.905773  227744 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17223-136166/kubeconfig
	I0911 11:28:01.907629  227744 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0911 11:28:01.907649  227744 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0911 11:28:01.907712  227744 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-517978
	I0911 11:28:01.907969  227744 kapi.go:59] client config for multinode-517978: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17223-136166/.minikube/profiles/multinode-517978/client.crt", KeyFile:"/home/jenkins/minikube-integration/17223-136166/.minikube/profiles/multinode-517978/client.key", CAFile:"/home/jenkins/minikube-integration/17223-136166/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d6c140), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0911 11:28:01.908384  227744 round_trippers.go:463] GET https://192.168.58.2:8443/apis/storage.k8s.io/v1/storageclasses
	I0911 11:28:01.908394  227744 round_trippers.go:469] Request Headers:
	I0911 11:28:01.908405  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:28:01.908415  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:28:01.909780  227744 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0911 11:28:01.909800  227744 round_trippers.go:577] Response Headers:
	I0911 11:28:01.909811  227744 round_trippers.go:580]     Audit-Id: 170afbee-6cff-4c9f-aea3-5e643c49cb84
	I0911 11:28:01.909822  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:28:01.909836  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:28:01.909849  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:28:01.909858  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:28:01.909871  227744 round_trippers.go:580]     Content-Length: 291
	I0911 11:28:01.909881  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:28:01 GMT
	I0911 11:28:01.910072  227744 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"27517565-a45b-4d59-9ce6-25ae123bbba6","resourceVersion":"348","creationTimestamp":"2023-09-11T11:27:48Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0911 11:28:01.910200  227744 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-517978" context rescaled to 1 replicas
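The GET/PUT pair against .../deployments/coredns/scale above is an edit of the Scale subresource: spec.replicas drops from 2 to 1 because a single-node cluster does not need two CoreDNS pods. The equivalent client-go calls, as a sketch reusing a clientset like the one in the earlier example:

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // rescaleCoreDNS mirrors the round_trippers GET/PUT on the coredns
    // Scale subresource, pinning the deployment to one replica.
    func rescaleCoreDNS(cs *kubernetes.Clientset) error {
    	ctx := context.TODO()
    	sc, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
    	if err != nil {
    		return err
    	}
    	sc.Spec.Replicas = 1
    	_, err = cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", sc, metav1.UpdateOptions{})
    	return err
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/17223-136166/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(rescaleCoreDNS(cs))
    }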
	I0911 11:28:01.910232  227744 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0911 11:28:01.911958  227744 out.go:177] * Verifying Kubernetes components...
	I0911 11:28:01.913533  227744 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0911 11:28:01.924682  227744 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32967 SSHKeyPath:/home/jenkins/minikube-integration/17223-136166/.minikube/machines/multinode-517978/id_rsa Username:docker}
	I0911 11:28:01.960684  227744 round_trippers.go:574] Response Status: 200 OK in 52 milliseconds
	I0911 11:28:01.960714  227744 round_trippers.go:577] Response Headers:
	I0911 11:28:01.960723  227744 round_trippers.go:580]     Audit-Id: bde0a036-327a-46cc-af0b-b6d721decfcf
	I0911 11:28:01.960731  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:28:01.960739  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:28:01.960747  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:28:01.960756  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:28:01.960763  227744 round_trippers.go:580]     Content-Length: 109
	I0911 11:28:01.960771  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:28:01 GMT
	I0911 11:28:01.961650  227744 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"350"},"items":[]}
	I0911 11:28:01.961982  227744 addons.go:231] Setting addon default-storageclass=true in "multinode-517978"
	I0911 11:28:01.962023  227744 host.go:66] Checking if "multinode-517978" exists ...
	I0911 11:28:01.962559  227744 cli_runner.go:164] Run: docker container inspect multinode-517978 --format={{.State.Status}}
	I0911 11:28:01.989268  227744 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0911 11:28:01.989292  227744 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0911 11:28:01.989354  227744 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-517978
	I0911 11:28:02.005527  227744 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32967 SSHKeyPath:/home/jenkins/minikube-integration/17223-136166/.minikube/machines/multinode-517978/id_rsa Username:docker}
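The 271-byte storageclass.yaml streamed above is minikube's default-StorageClass addon manifest; its contents are not captured in this log. A minimal sketch of a manifest with the same effect is shown below. The provisioner name is inferred from the k8s.io-minikube-hostpath endpoint created at 11:28:02 and the "storageclass.storage.k8s.io/standard created" line that follows, so treat it as an assumption rather than the verbatim file.

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: standard                                          # matches "storageclass.storage.k8s.io/standard created" below
      annotations:
        storageclass.kubernetes.io/is-default-class: "true"   # what "Setting addon default-storageclass" toggles
    provisioner: k8s.io/minikube-hostpath                     # assumed; inferred from the k8s.io-minikube-hostpath endpoint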
	I0911 11:28:02.093151  227744 command_runner.go:130] > apiVersion: v1
	I0911 11:28:02.093171  227744 command_runner.go:130] > data:
	I0911 11:28:02.093176  227744 command_runner.go:130] >   Corefile: |
	I0911 11:28:02.093179  227744 command_runner.go:130] >     .:53 {
	I0911 11:28:02.093183  227744 command_runner.go:130] >         errors
	I0911 11:28:02.093189  227744 command_runner.go:130] >         health {
	I0911 11:28:02.093196  227744 command_runner.go:130] >            lameduck 5s
	I0911 11:28:02.093202  227744 command_runner.go:130] >         }
	I0911 11:28:02.093208  227744 command_runner.go:130] >         ready
	I0911 11:28:02.093218  227744 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0911 11:28:02.093225  227744 command_runner.go:130] >            pods insecure
	I0911 11:28:02.093235  227744 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0911 11:28:02.093254  227744 command_runner.go:130] >            ttl 30
	I0911 11:28:02.093263  227744 command_runner.go:130] >         }
	I0911 11:28:02.093271  227744 command_runner.go:130] >         prometheus :9153
	I0911 11:28:02.093279  227744 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0911 11:28:02.093290  227744 command_runner.go:130] >            max_concurrent 1000
	I0911 11:28:02.093295  227744 command_runner.go:130] >         }
	I0911 11:28:02.093302  227744 command_runner.go:130] >         cache 30
	I0911 11:28:02.093308  227744 command_runner.go:130] >         loop
	I0911 11:28:02.093316  227744 command_runner.go:130] >         reload
	I0911 11:28:02.093322  227744 command_runner.go:130] >         loadbalance
	I0911 11:28:02.093329  227744 command_runner.go:130] >     }
	I0911 11:28:02.093337  227744 command_runner.go:130] > kind: ConfigMap
	I0911 11:28:02.093345  227744 command_runner.go:130] > metadata:
	I0911 11:28:02.093361  227744 command_runner.go:130] >   creationTimestamp: "2023-09-11T11:27:48Z"
	I0911 11:28:02.093373  227744 command_runner.go:130] >   name: coredns
	I0911 11:28:02.093380  227744 command_runner.go:130] >   namespace: kube-system
	I0911 11:28:02.093386  227744 command_runner.go:130] >   resourceVersion: "229"
	I0911 11:28:02.093398  227744 command_runner.go:130] >   uid: d105bd01-039a-4a2b-91dc-050f8ff5497e
	I0911 11:28:02.093610  227744 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
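The one-liner above patches CoreDNS in place: the first sed expression inserts a hosts block immediately before the forward plugin, and the second inserts the log plugin immediately before errors. Applied to the Corefile dumped above, the resulting server block should come out as follows (reconstructed from the sed expressions; the patched Corefile itself is not printed in this log):

    .:53 {
        log
        errors
        health {
           lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
           pods insecure
           fallthrough in-addr.arpa ip6.arpa
           ttl 30
        }
        prometheus :9153
        hosts {
           192.168.58.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf {
           max_concurrent 1000
        }
        cache 30
        loop
        reload
        loadbalance
    }

The "configmap/coredns replaced" and "host record injected" lines at 11:28:02 below confirm the replace succeeded; the hosts block is what lets pods resolve host.minikube.internal to the gateway address 192.168.58.1.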
	I0911 11:28:02.093785  227744 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17223-136166/kubeconfig
	I0911 11:28:02.093990  227744 kapi.go:59] client config for multinode-517978: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17223-136166/.minikube/profiles/multinode-517978/client.crt", KeyFile:"/home/jenkins/minikube-integration/17223-136166/.minikube/profiles/multinode-517978/client.key", CAFile:"/home/jenkins/minikube-integration/17223-136166/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d6c140), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0911 11:28:02.094248  227744 node_ready.go:35] waiting up to 6m0s for node "multinode-517978" to be "Ready" ...
	I0911 11:28:02.094323  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-517978
	I0911 11:28:02.094330  227744 round_trippers.go:469] Request Headers:
	I0911 11:28:02.094338  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:28:02.094344  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:28:02.097936  227744 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0911 11:28:02.097958  227744 round_trippers.go:577] Response Headers:
	I0911 11:28:02.097968  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:28:02.097978  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:28:02.097987  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:28:02 GMT
	I0911 11:28:02.098001  227744 round_trippers.go:580]     Audit-Id: 8d7a6bed-c8ab-424b-9d4c-5e05eb3c00f2
	I0911 11:28:02.098009  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:28:02.098026  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:28:02.098173  227744 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517978","uid":"731ec007-5e9d-4124-a20e-4493363fd833","resourceVersion":"317","creationTimestamp":"2023-09-11T11:27:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517978","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-517978","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_27_49_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:27:45Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0911 11:28:02.099009  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-517978
	I0911 11:28:02.099030  227744 round_trippers.go:469] Request Headers:
	I0911 11:28:02.099044  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:28:02.099054  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:28:02.101234  227744 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:28:02.101257  227744 round_trippers.go:577] Response Headers:
	I0911 11:28:02.101268  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:28:02.101276  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:28:02.101285  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:28:02 GMT
	I0911 11:28:02.101298  227744 round_trippers.go:580]     Audit-Id: 2d3b1023-97c2-40c2-8a90-fc0c470be811
	I0911 11:28:02.101307  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:28:02.101320  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:28:02.101444  227744 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517978","uid":"731ec007-5e9d-4124-a20e-4493363fd833","resourceVersion":"317","creationTimestamp":"2023-09-11T11:27:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517978","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-517978","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_27_49_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:27:45Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0911 11:28:02.178375  227744 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0911 11:28:02.281197  227744 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0911 11:28:02.602700  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-517978
	I0911 11:28:02.602720  227744 round_trippers.go:469] Request Headers:
	I0911 11:28:02.602728  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:28:02.602734  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:28:02.662912  227744 round_trippers.go:574] Response Status: 200 OK in 60 milliseconds
	I0911 11:28:02.662992  227744 round_trippers.go:577] Response Headers:
	I0911 11:28:02.663012  227744 round_trippers.go:580]     Audit-Id: ecf1d4da-998a-433d-bf98-d3a85b37c52e
	I0911 11:28:02.663029  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:28:02.663044  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:28:02.663060  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:28:02.663077  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:28:02.663098  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:28:02 GMT
	I0911 11:28:02.663231  227744 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517978","uid":"731ec007-5e9d-4124-a20e-4493363fd833","resourceVersion":"317","creationTimestamp":"2023-09-11T11:27:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517978","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-517978","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_27_49_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:27:45Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0911 11:28:02.679758  227744 command_runner.go:130] > configmap/coredns replaced
	I0911 11:28:02.684467  227744 start.go:901] {"host.minikube.internal": 192.168.58.1} host record injected into CoreDNS's ConfigMap
	I0911 11:28:02.684558  227744 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0911 11:28:02.939095  227744 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0911 11:28:02.944830  227744 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0911 11:28:02.951448  227744 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0911 11:28:02.957524  227744 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0911 11:28:02.964857  227744 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0911 11:28:02.972669  227744 command_runner.go:130] > pod/storage-provisioner created
	I0911 11:28:02.980080  227744 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0911 11:28:02.981647  227744 addons.go:502] enable addons completed in 1.105881371s: enabled=[default-storageclass storage-provisioner]
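From this point to the end of the excerpt, node_ready.go does nothing but poll GET /api/v1/nodes/multinode-517978 (the timestamps show a roughly 500ms interval) and inspect the Ready condition of the returned Node; each node "multinode-517978" has status "Ready":"False" line below is one iteration of that loop, and the node is still NotReady throughout this excerpt. A minimal client-go sketch of the same check, as an illustration rather than minikube's actual implementation (the kubeconfig source and poll interval are assumptions), looks like:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitNodeReady polls the Node object until its Ready condition is True,
    // mirroring the repeated GET /api/v1/nodes/<name> requests in the log above.
    func waitNodeReady(cs kubernetes.Interface, name string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
            if err == nil {
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                        return nil
                    }
                }
            }
            time.Sleep(500 * time.Millisecond) // assumed interval, matching the ~500ms spacing of the GETs above
        }
        return fmt.Errorf("node %q did not become Ready within %v", name, timeout)
    }

    func main() {
        // Load the default kubeconfig; path is illustrative only.
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        if err := waitNodeReady(kubernetes.NewForConfigOrDie(cfg), "multinode-517978", 6*time.Minute); err != nil {
            panic(err)
        }
        fmt.Println("node Ready")
    }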
	I0911 11:28:03.102415  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-517978
	I0911 11:28:03.102434  227744 round_trippers.go:469] Request Headers:
	I0911 11:28:03.102442  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:28:03.102449  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:28:03.104653  227744 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:28:03.104670  227744 round_trippers.go:577] Response Headers:
	I0911 11:28:03.104677  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:28:03 GMT
	I0911 11:28:03.104682  227744 round_trippers.go:580]     Audit-Id: adc387df-4222-4677-945b-483dd4d5d618
	I0911 11:28:03.104688  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:28:03.104693  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:28:03.104698  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:28:03.104703  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:28:03.104802  227744 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517978","uid":"731ec007-5e9d-4124-a20e-4493363fd833","resourceVersion":"317","creationTimestamp":"2023-09-11T11:27:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517978","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-517978","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_27_49_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:27:45Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0911 11:28:03.602434  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-517978
	I0911 11:28:03.602454  227744 round_trippers.go:469] Request Headers:
	I0911 11:28:03.602464  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:28:03.602470  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:28:03.604813  227744 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:28:03.604830  227744 round_trippers.go:577] Response Headers:
	I0911 11:28:03.604837  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:28:03.604843  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:28:03.604849  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:28:03.604858  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:28:03.604865  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:28:03 GMT
	I0911 11:28:03.604876  227744 round_trippers.go:580]     Audit-Id: 631ead84-b22f-4ed0-b6d1-8e6699ee93a8
	I0911 11:28:03.605017  227744 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517978","uid":"731ec007-5e9d-4124-a20e-4493363fd833","resourceVersion":"317","creationTimestamp":"2023-09-11T11:27:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517978","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-517978","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_27_49_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:27:45Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0911 11:28:04.102713  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-517978
	I0911 11:28:04.102739  227744 round_trippers.go:469] Request Headers:
	I0911 11:28:04.102747  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:28:04.102755  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:28:04.105118  227744 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:28:04.105153  227744 round_trippers.go:577] Response Headers:
	I0911 11:28:04.105163  227744 round_trippers.go:580]     Audit-Id: bf6206b9-b507-4784-8994-fbe44db8f904
	I0911 11:28:04.105172  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:28:04.105181  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:28:04.105190  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:28:04.105199  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:28:04.105205  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:28:04 GMT
	I0911 11:28:04.105311  227744 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517978","uid":"731ec007-5e9d-4124-a20e-4493363fd833","resourceVersion":"317","creationTimestamp":"2023-09-11T11:27:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517978","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-517978","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_27_49_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:27:45Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0911 11:28:04.105625  227744 node_ready.go:58] node "multinode-517978" has status "Ready":"False"
	I0911 11:28:04.602938  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-517978
	I0911 11:28:04.602959  227744 round_trippers.go:469] Request Headers:
	I0911 11:28:04.602967  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:28:04.602975  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:28:04.605200  227744 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:28:04.605224  227744 round_trippers.go:577] Response Headers:
	I0911 11:28:04.605234  227744 round_trippers.go:580]     Audit-Id: 84e0004c-4e13-4e23-966a-d96d8fb141f4
	I0911 11:28:04.605243  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:28:04.605251  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:28:04.605259  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:28:04.605267  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:28:04.605275  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:28:04 GMT
	I0911 11:28:04.605379  227744 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517978","uid":"731ec007-5e9d-4124-a20e-4493363fd833","resourceVersion":"317","creationTimestamp":"2023-09-11T11:27:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517978","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-517978","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_27_49_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:27:45Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0911 11:28:05.102998  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-517978
	I0911 11:28:05.103018  227744 round_trippers.go:469] Request Headers:
	I0911 11:28:05.103025  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:28:05.103032  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:28:05.105319  227744 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:28:05.105342  227744 round_trippers.go:577] Response Headers:
	I0911 11:28:05.105352  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:28:05.105361  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:28:05.105370  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:28:05 GMT
	I0911 11:28:05.105378  227744 round_trippers.go:580]     Audit-Id: 8ebef0ae-90a8-422c-81d7-e10c431d3049
	I0911 11:28:05.105388  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:28:05.105397  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:28:05.105519  227744 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517978","uid":"731ec007-5e9d-4124-a20e-4493363fd833","resourceVersion":"317","creationTimestamp":"2023-09-11T11:27:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517978","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-517978","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_27_49_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:27:45Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0911 11:28:05.602159  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-517978
	I0911 11:28:05.602182  227744 round_trippers.go:469] Request Headers:
	I0911 11:28:05.602191  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:28:05.602197  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:28:05.604478  227744 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:28:05.604500  227744 round_trippers.go:577] Response Headers:
	I0911 11:28:05.604508  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:28:05.604515  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:28:05.604527  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:28:05.604533  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:28:05.604539  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:28:05 GMT
	I0911 11:28:05.604544  227744 round_trippers.go:580]     Audit-Id: 2b8b3117-d2df-42c6-ba2e-414cac626401
	I0911 11:28:05.604635  227744 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517978","uid":"731ec007-5e9d-4124-a20e-4493363fd833","resourceVersion":"317","creationTimestamp":"2023-09-11T11:27:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517978","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-517978","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_27_49_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:27:45Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0911 11:28:06.102173  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-517978
	I0911 11:28:06.102193  227744 round_trippers.go:469] Request Headers:
	I0911 11:28:06.102205  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:28:06.102215  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:28:06.104497  227744 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:28:06.104517  227744 round_trippers.go:577] Response Headers:
	I0911 11:28:06.104526  227744 round_trippers.go:580]     Audit-Id: 8884aa45-7973-4cc3-ac85-8a3cdddef29b
	I0911 11:28:06.104535  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:28:06.104543  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:28:06.104551  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:28:06.104560  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:28:06.104574  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:28:06 GMT
	I0911 11:28:06.104695  227744 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517978","uid":"731ec007-5e9d-4124-a20e-4493363fd833","resourceVersion":"317","creationTimestamp":"2023-09-11T11:27:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517978","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-517978","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_27_49_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:27:45Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0911 11:28:06.602276  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-517978
	I0911 11:28:06.602298  227744 round_trippers.go:469] Request Headers:
	I0911 11:28:06.602306  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:28:06.602312  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:28:06.604490  227744 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:28:06.604508  227744 round_trippers.go:577] Response Headers:
	I0911 11:28:06.604515  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:28:06.604520  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:28:06 GMT
	I0911 11:28:06.604526  227744 round_trippers.go:580]     Audit-Id: 25e8fba0-3a1c-4666-a255-495cd9901175
	I0911 11:28:06.604534  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:28:06.604542  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:28:06.604550  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:28:06.604656  227744 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517978","uid":"731ec007-5e9d-4124-a20e-4493363fd833","resourceVersion":"317","creationTimestamp":"2023-09-11T11:27:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517978","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-517978","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_27_49_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:27:45Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0911 11:28:06.604959  227744 node_ready.go:58] node "multinode-517978" has status "Ready":"False"
	I0911 11:28:07.102655  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-517978
	I0911 11:28:07.102674  227744 round_trippers.go:469] Request Headers:
	I0911 11:28:07.102682  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:28:07.102689  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:28:07.104814  227744 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:28:07.104835  227744 round_trippers.go:577] Response Headers:
	I0911 11:28:07.104845  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:28:07.104854  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:28:07.104862  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:28:07.104873  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:28:07 GMT
	I0911 11:28:07.104887  227744 round_trippers.go:580]     Audit-Id: 106c3c37-d66e-4f94-b3d0-72f489ebbde8
	I0911 11:28:07.104900  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:28:07.105009  227744 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517978","uid":"731ec007-5e9d-4124-a20e-4493363fd833","resourceVersion":"317","creationTimestamp":"2023-09-11T11:27:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517978","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-517978","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_27_49_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:27:45Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0911 11:28:07.602670  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-517978
	I0911 11:28:07.602690  227744 round_trippers.go:469] Request Headers:
	I0911 11:28:07.602698  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:28:07.602704  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:28:07.604871  227744 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:28:07.604892  227744 round_trippers.go:577] Response Headers:
	I0911 11:28:07.604899  227744 round_trippers.go:580]     Audit-Id: 5610a022-ce9a-4c4a-bfed-87cea2624431
	I0911 11:28:07.604905  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:28:07.604913  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:28:07.604918  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:28:07.604924  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:28:07.604929  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:28:07 GMT
	I0911 11:28:07.605053  227744 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517978","uid":"731ec007-5e9d-4124-a20e-4493363fd833","resourceVersion":"317","creationTimestamp":"2023-09-11T11:27:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517978","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-517978","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_27_49_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:27:45Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0911 11:28:08.102704  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-517978
	I0911 11:28:08.102723  227744 round_trippers.go:469] Request Headers:
	I0911 11:28:08.102731  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:28:08.102738  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:28:08.104887  227744 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:28:08.104904  227744 round_trippers.go:577] Response Headers:
	I0911 11:28:08.104910  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:28:08.104916  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:28:08 GMT
	I0911 11:28:08.104921  227744 round_trippers.go:580]     Audit-Id: bb04f004-3588-4433-af47-18dd5ad4d43c
	I0911 11:28:08.104933  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:28:08.104940  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:28:08.104959  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:28:08.105133  227744 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517978","uid":"731ec007-5e9d-4124-a20e-4493363fd833","resourceVersion":"317","creationTimestamp":"2023-09-11T11:27:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517978","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-517978","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_27_49_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:27:45Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0911 11:28:08.602786  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-517978
	I0911 11:28:08.602805  227744 round_trippers.go:469] Request Headers:
	I0911 11:28:08.602815  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:28:08.602822  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:28:08.605014  227744 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:28:08.605035  227744 round_trippers.go:577] Response Headers:
	I0911 11:28:08.605046  227744 round_trippers.go:580]     Audit-Id: 4c66b991-d792-45c2-b279-83daf2dfe6e2
	I0911 11:28:08.605056  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:28:08.605066  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:28:08.605078  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:28:08.605087  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:28:08.605097  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:28:08 GMT
	I0911 11:28:08.605222  227744 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517978","uid":"731ec007-5e9d-4124-a20e-4493363fd833","resourceVersion":"317","creationTimestamp":"2023-09-11T11:27:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517978","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-517978","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_27_49_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:27:45Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0911 11:28:08.605643  227744 node_ready.go:58] node "multinode-517978" has status "Ready":"False"
	I0911 11:28:09.102803  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-517978
	I0911 11:28:09.102822  227744 round_trippers.go:469] Request Headers:
	I0911 11:28:09.102831  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:28:09.102838  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:28:09.105080  227744 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:28:09.105103  227744 round_trippers.go:577] Response Headers:
	I0911 11:28:09.105112  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:28:09.105121  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:28:09 GMT
	I0911 11:28:09.105129  227744 round_trippers.go:580]     Audit-Id: 76e45a0d-7352-4b8e-bbfb-6b7a6bef2a50
	I0911 11:28:09.105138  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:28:09.105152  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:28:09.105161  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:28:09.105342  227744 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517978","uid":"731ec007-5e9d-4124-a20e-4493363fd833","resourceVersion":"317","creationTimestamp":"2023-09-11T11:27:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517978","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-517978","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_27_49_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:27:45Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0911 11:28:09.602988  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-517978
	I0911 11:28:09.603016  227744 round_trippers.go:469] Request Headers:
	I0911 11:28:09.603025  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:28:09.603031  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:28:09.605250  227744 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:28:09.605268  227744 round_trippers.go:577] Response Headers:
	I0911 11:28:09.605275  227744 round_trippers.go:580]     Audit-Id: d127e119-2155-4d4c-9688-bd5130cbfe0e
	I0911 11:28:09.605281  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:28:09.605286  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:28:09.605292  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:28:09.605297  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:28:09.605302  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:28:09 GMT
	I0911 11:28:09.605420  227744 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517978","uid":"731ec007-5e9d-4124-a20e-4493363fd833","resourceVersion":"317","creationTimestamp":"2023-09-11T11:27:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517978","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-517978","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_27_49_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:27:45Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0911 11:28:10.102034  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-517978
	I0911 11:28:10.102065  227744 round_trippers.go:469] Request Headers:
	I0911 11:28:10.102074  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:28:10.102080  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:28:10.104314  227744 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:28:10.104334  227744 round_trippers.go:577] Response Headers:
	I0911 11:28:10.104343  227744 round_trippers.go:580]     Audit-Id: e4db2133-04fd-40b5-bbea-2f797b6ff4b9
	I0911 11:28:10.104350  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:28:10.104358  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:28:10.104374  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:28:10.104384  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:28:10.104397  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:28:10 GMT
	I0911 11:28:10.104594  227744 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517978","uid":"731ec007-5e9d-4124-a20e-4493363fd833","resourceVersion":"317","creationTimestamp":"2023-09-11T11:27:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517978","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-517978","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_27_49_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:27:45Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0911 11:28:10.602119  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-517978
	I0911 11:28:10.602159  227744 round_trippers.go:469] Request Headers:
	I0911 11:28:10.602168  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:28:10.602174  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:28:10.604437  227744 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:28:10.604459  227744 round_trippers.go:577] Response Headers:
	I0911 11:28:10.604469  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:28:10.604478  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:28:10 GMT
	I0911 11:28:10.604487  227744 round_trippers.go:580]     Audit-Id: 2b0b510d-bae1-40ef-9a97-3a270b030933
	I0911 11:28:10.604506  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:28:10.604514  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:28:10.604521  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:28:10.604619  227744 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517978","uid":"731ec007-5e9d-4124-a20e-4493363fd833","resourceVersion":"317","creationTimestamp":"2023-09-11T11:27:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517978","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-517978","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_27_49_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:27:45Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0911 11:28:11.102959  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-517978
	I0911 11:28:11.102978  227744 round_trippers.go:469] Request Headers:
	I0911 11:28:11.102986  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:28:11.102992  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:28:11.105094  227744 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:28:11.105117  227744 round_trippers.go:577] Response Headers:
	I0911 11:28:11.105126  227744 round_trippers.go:580]     Audit-Id: 4a28d320-d407-4935-a173-f8487d45ad5e
	I0911 11:28:11.105136  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:28:11.105149  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:28:11.105161  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:28:11.105170  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:28:11.105182  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:28:11 GMT
	I0911 11:28:11.105309  227744 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517978","uid":"731ec007-5e9d-4124-a20e-4493363fd833","resourceVersion":"317","creationTimestamp":"2023-09-11T11:27:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517978","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-517978","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_27_49_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:27:45Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0911 11:28:11.105643  227744 node_ready.go:58] node "multinode-517978" has status "Ready":"False"
	I0911 11:28:11.602915  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-517978
	I0911 11:28:11.602935  227744 round_trippers.go:469] Request Headers:
	I0911 11:28:11.602943  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:28:11.602949  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:28:11.605257  227744 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:28:11.605276  227744 round_trippers.go:577] Response Headers:
	I0911 11:28:11.605282  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:28:11.605288  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:28:11 GMT
	I0911 11:28:11.605294  227744 round_trippers.go:580]     Audit-Id: 6cafb4ed-6d42-445d-9826-65323508521c
	I0911 11:28:11.605301  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:28:11.605313  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:28:11.605321  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:28:11.605462  227744 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517978","uid":"731ec007-5e9d-4124-a20e-4493363fd833","resourceVersion":"317","creationTimestamp":"2023-09-11T11:27:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517978","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-517978","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_27_49_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:27:45Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0911 11:28:12.102335  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-517978
	I0911 11:28:12.102356  227744 round_trippers.go:469] Request Headers:
	I0911 11:28:12.102364  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:28:12.102371  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:28:12.104539  227744 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:28:12.104565  227744 round_trippers.go:577] Response Headers:
	I0911 11:28:12.104574  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:28:12.104583  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:28:12 GMT
	I0911 11:28:12.104592  227744 round_trippers.go:580]     Audit-Id: 7880ecbc-feba-4b6f-a1b1-ceb0bda68e45
	I0911 11:28:12.104601  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:28:12.104610  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:28:12.104619  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:28:12.104766  227744 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517978","uid":"731ec007-5e9d-4124-a20e-4493363fd833","resourceVersion":"317","creationTimestamp":"2023-09-11T11:27:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517978","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-517978","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_27_49_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:27:45Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0911 11:28:12.602456  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-517978
	I0911 11:28:12.602484  227744 round_trippers.go:469] Request Headers:
	I0911 11:28:12.602498  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:28:12.602506  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:28:12.604823  227744 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:28:12.604846  227744 round_trippers.go:577] Response Headers:
	I0911 11:28:12.604854  227744 round_trippers.go:580]     Audit-Id: 9af759a0-ea9b-40bc-9cec-7f5b22d1e35a
	I0911 11:28:12.604860  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:28:12.604869  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:28:12.604877  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:28:12.604889  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:28:12.604898  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:28:12 GMT
	I0911 11:28:12.604995  227744 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517978","uid":"731ec007-5e9d-4124-a20e-4493363fd833","resourceVersion":"317","creationTimestamp":"2023-09-11T11:27:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517978","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-517978","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_27_49_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:27:45Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0911 11:28:13.102648  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-517978
	I0911 11:28:13.102669  227744 round_trippers.go:469] Request Headers:
	I0911 11:28:13.102677  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:28:13.102684  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:28:13.104830  227744 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:28:13.104849  227744 round_trippers.go:577] Response Headers:
	I0911 11:28:13.104855  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:28:13 GMT
	I0911 11:28:13.104861  227744 round_trippers.go:580]     Audit-Id: ceebf387-92fb-46c7-afa7-9e2941c6f083
	I0911 11:28:13.104869  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:28:13.104879  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:28:13.104887  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:28:13.104902  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:28:13.105006  227744 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517978","uid":"731ec007-5e9d-4124-a20e-4493363fd833","resourceVersion":"317","creationTimestamp":"2023-09-11T11:27:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517978","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-517978","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_27_49_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:27:45Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0911 11:28:13.602647  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-517978
	I0911 11:28:13.602668  227744 round_trippers.go:469] Request Headers:
	I0911 11:28:13.602676  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:28:13.602682  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:28:13.604835  227744 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:28:13.604849  227744 round_trippers.go:577] Response Headers:
	I0911 11:28:13.604856  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:28:13.604862  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:28:13 GMT
	I0911 11:28:13.604871  227744 round_trippers.go:580]     Audit-Id: 7af6c9c4-898d-4970-abf1-c7dc1c127fe5
	I0911 11:28:13.604880  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:28:13.604892  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:28:13.604901  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:28:13.605006  227744 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517978","uid":"731ec007-5e9d-4124-a20e-4493363fd833","resourceVersion":"317","creationTimestamp":"2023-09-11T11:27:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517978","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-517978","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_27_49_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:27:45Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0911 11:28:13.605313  227744 node_ready.go:58] node "multinode-517978" has status "Ready":"False"
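
The cycle repeating above is minikube's node-readiness wait: node_ready.go polls GET /api/v1/nodes/multinode-517978 roughly every 500ms and keeps logging has status "Ready":"False" until the node's Ready condition flips to true. Below is a minimal client-go sketch of that polling pattern, not minikube's actual implementation; the function name waitNodeReady, the 500ms interval, the 6-minute timeout, and the kubeconfig default are assumptions.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the named node until its Ready condition is True,
// mirroring the GET-every-500ms loop visible in the log above.
func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) error {
	for {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		for _, cond := range node.Status.Conditions {
			if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
				return nil
			}
		}
		fmt.Printf("node %q has status \"Ready\":\"False\"\n", name)
		select {
		case <-ctx.Done():
			return ctx.Err() // overall timeout or cancellation
		case <-time.After(500 * time.Millisecond):
		}
	}
}

func main() {
	// Load the usual ~/.kube/config; in this run it points at 192.168.58.2:8443.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	if err := waitNodeReady(ctx, cs, "multinode-517978"); err != nil {
		panic(err)
	}
}

Run against the cluster above, such a loop returns once the log would flip to "Ready":"True".
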
	I0911 11:28:14.102683  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-517978
	I0911 11:28:14.102703  227744 round_trippers.go:469] Request Headers:
	I0911 11:28:14.102714  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:28:14.102722  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:28:14.105011  227744 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:28:14.105040  227744 round_trippers.go:577] Response Headers:
	I0911 11:28:14.105048  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:28:14.105055  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:28:14 GMT
	I0911 11:28:14.105060  227744 round_trippers.go:580]     Audit-Id: f02a9c4c-7bea-4774-adb5-72ba600a0ba6
	I0911 11:28:14.105065  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:28:14.105071  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:28:14.105076  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:28:14.105183  227744 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517978","uid":"731ec007-5e9d-4124-a20e-4493363fd833","resourceVersion":"317","creationTimestamp":"2023-09-11T11:27:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517978","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-517978","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_27_49_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:27:45Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0911 11:28:14.602850  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-517978
	I0911 11:28:14.602876  227744 round_trippers.go:469] Request Headers:
	I0911 11:28:14.602889  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:28:14.602899  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:28:14.604983  227744 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:28:14.605007  227744 round_trippers.go:577] Response Headers:
	I0911 11:28:14.605018  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:28:14.605027  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:28:14.605036  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:28:14 GMT
	I0911 11:28:14.605046  227744 round_trippers.go:580]     Audit-Id: 3cb0b638-7b67-4a23-b007-b42db6dd4eb0
	I0911 11:28:14.605056  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:28:14.605070  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:28:14.605159  227744 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517978","uid":"731ec007-5e9d-4124-a20e-4493363fd833","resourceVersion":"317","creationTimestamp":"2023-09-11T11:27:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517978","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-517978","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_27_49_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:27:45Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0911 11:28:15.102568  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-517978
	I0911 11:28:15.102592  227744 round_trippers.go:469] Request Headers:
	I0911 11:28:15.102600  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:28:15.102606  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:28:15.104931  227744 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:28:15.104956  227744 round_trippers.go:577] Response Headers:
	I0911 11:28:15.104965  227744 round_trippers.go:580]     Audit-Id: 84fb7a94-ae24-41e6-b507-a75fda492bad
	I0911 11:28:15.104973  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:28:15.104981  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:28:15.104990  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:28:15.105000  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:28:15.105007  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:28:15 GMT
	I0911 11:28:15.105192  227744 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517978","uid":"731ec007-5e9d-4124-a20e-4493363fd833","resourceVersion":"317","creationTimestamp":"2023-09-11T11:27:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517978","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-517978","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_27_49_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:27:45Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0911 11:28:15.602741  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-517978
	I0911 11:28:15.602763  227744 round_trippers.go:469] Request Headers:
	I0911 11:28:15.602771  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:28:15.602777  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:28:15.604952  227744 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:28:15.604979  227744 round_trippers.go:577] Response Headers:
	I0911 11:28:15.604991  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:28:15.605000  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:28:15.605010  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:28:15 GMT
	I0911 11:28:15.605021  227744 round_trippers.go:580]     Audit-Id: 024ca325-e01f-4ee9-8e01-97a59eedefb3
	I0911 11:28:15.605028  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:28:15.605037  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:28:15.605131  227744 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517978","uid":"731ec007-5e9d-4124-a20e-4493363fd833","resourceVersion":"317","creationTimestamp":"2023-09-11T11:27:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517978","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-517978","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_27_49_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:27:45Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0911 11:28:15.605445  227744 node_ready.go:58] node "multinode-517978" has status "Ready":"False"
	I0911 11:28:16.102753  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-517978
	I0911 11:28:16.102772  227744 round_trippers.go:469] Request Headers:
	I0911 11:28:16.102780  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:28:16.102787  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:28:16.104981  227744 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:28:16.105005  227744 round_trippers.go:577] Response Headers:
	I0911 11:28:16.105014  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:28:16 GMT
	I0911 11:28:16.105024  227744 round_trippers.go:580]     Audit-Id: 07c1713a-e03a-49ad-8e20-26654280d393
	I0911 11:28:16.105033  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:28:16.105042  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:28:16.105056  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:28:16.105065  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:28:16.105184  227744 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517978","uid":"731ec007-5e9d-4124-a20e-4493363fd833","resourceVersion":"317","creationTimestamp":"2023-09-11T11:27:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517978","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-517978","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_27_49_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:27:45Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0911 11:28:16.602841  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-517978
	I0911 11:28:16.602861  227744 round_trippers.go:469] Request Headers:
	I0911 11:28:16.602869  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:28:16.602876  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:28:16.605091  227744 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:28:16.605115  227744 round_trippers.go:577] Response Headers:
	I0911 11:28:16.605125  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:28:16.605133  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:28:16.605141  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:28:16.605149  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:28:16.605158  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:28:16 GMT
	I0911 11:28:16.605168  227744 round_trippers.go:580]     Audit-Id: ef8b5408-9ca0-4d1a-8e93-f30b79c5edd3
	I0911 11:28:16.605298  227744 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517978","uid":"731ec007-5e9d-4124-a20e-4493363fd833","resourceVersion":"317","creationTimestamp":"2023-09-11T11:27:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517978","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-517978","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_27_49_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:27:45Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0911 11:28:17.102128  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-517978
	I0911 11:28:17.102149  227744 round_trippers.go:469] Request Headers:
	I0911 11:28:17.102158  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:28:17.102164  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:28:17.104269  227744 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:28:17.104291  227744 round_trippers.go:577] Response Headers:
	I0911 11:28:17.104302  227744 round_trippers.go:580]     Audit-Id: 17e97895-29dd-4797-acca-9d497ce3baa1
	I0911 11:28:17.104311  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:28:17.104323  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:28:17.104335  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:28:17.104347  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:28:17.104354  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:28:17 GMT
	I0911 11:28:17.104475  227744 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517978","uid":"731ec007-5e9d-4124-a20e-4493363fd833","resourceVersion":"317","creationTimestamp":"2023-09-11T11:27:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517978","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-517978","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_27_49_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:27:45Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0911 11:28:17.602031  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-517978
	I0911 11:28:17.602053  227744 round_trippers.go:469] Request Headers:
	I0911 11:28:17.602063  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:28:17.602071  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:28:17.604401  227744 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:28:17.604427  227744 round_trippers.go:577] Response Headers:
	I0911 11:28:17.604433  227744 round_trippers.go:580]     Audit-Id: e0975147-ea7f-40b8-ab80-093e7ebe9856
	I0911 11:28:17.604439  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:28:17.604444  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:28:17.604450  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:28:17.604455  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:28:17.604460  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:28:17 GMT
	I0911 11:28:17.604646  227744 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517978","uid":"731ec007-5e9d-4124-a20e-4493363fd833","resourceVersion":"317","creationTimestamp":"2023-09-11T11:27:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517978","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-517978","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_27_49_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:27:45Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0911 11:28:18.102918  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-517978
	I0911 11:28:18.102938  227744 round_trippers.go:469] Request Headers:
	I0911 11:28:18.102946  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:28:18.102954  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:28:18.105086  227744 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:28:18.105111  227744 round_trippers.go:577] Response Headers:
	I0911 11:28:18.105122  227744 round_trippers.go:580]     Audit-Id: 4b548521-89e2-491e-a88b-26a24d8df550
	I0911 11:28:18.105132  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:28:18.105141  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:28:18.105150  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:28:18.105160  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:28:18.105174  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:28:18 GMT
	I0911 11:28:18.105286  227744 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517978","uid":"731ec007-5e9d-4124-a20e-4493363fd833","resourceVersion":"317","creationTimestamp":"2023-09-11T11:27:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517978","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-517978","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_27_49_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:27:45Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0911 11:28:18.105638  227744 node_ready.go:58] node "multinode-517978" has status "Ready":"False"
	I0911 11:28:18.602646  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-517978
	I0911 11:28:18.602665  227744 round_trippers.go:469] Request Headers:
	I0911 11:28:18.602673  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:28:18.602679  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:28:18.604789  227744 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:28:18.604808  227744 round_trippers.go:577] Response Headers:
	I0911 11:28:18.604815  227744 round_trippers.go:580]     Audit-Id: 4fe54b7d-b1d8-4b95-82f8-204d160ddf84
	I0911 11:28:18.604823  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:28:18.604831  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:28:18.604840  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:28:18.604849  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:28:18.604858  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:28:18 GMT
	I0911 11:28:18.604963  227744 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517978","uid":"731ec007-5e9d-4124-a20e-4493363fd833","resourceVersion":"317","creationTimestamp":"2023-09-11T11:27:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517978","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-517978","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_27_49_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:27:45Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0911 11:28:19.102620  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-517978
	I0911 11:28:19.102641  227744 round_trippers.go:469] Request Headers:
	I0911 11:28:19.102649  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:28:19.102655  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:28:19.104813  227744 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:28:19.104835  227744 round_trippers.go:577] Response Headers:
	I0911 11:28:19.104847  227744 round_trippers.go:580]     Audit-Id: 8f3c6ccd-b326-4ee0-b15f-31c84ad1b0a8
	I0911 11:28:19.104857  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:28:19.104865  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:28:19.104875  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:28:19.104890  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:28:19.104906  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:28:19 GMT
	I0911 11:28:19.105037  227744 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517978","uid":"731ec007-5e9d-4124-a20e-4493363fd833","resourceVersion":"317","creationTimestamp":"2023-09-11T11:27:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517978","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-517978","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_27_49_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:27:45Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0911 11:28:19.602689  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-517978
	I0911 11:28:19.602714  227744 round_trippers.go:469] Request Headers:
	I0911 11:28:19.602726  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:28:19.602736  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:28:19.604874  227744 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:28:19.604892  227744 round_trippers.go:577] Response Headers:
	I0911 11:28:19.604902  227744 round_trippers.go:580]     Audit-Id: c912101f-4194-4844-9c7a-ca77321eaedf
	I0911 11:28:19.604911  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:28:19.604920  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:28:19.604929  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:28:19.604938  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:28:19.604946  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:28:19 GMT
	I0911 11:28:19.605039  227744 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517978","uid":"731ec007-5e9d-4124-a20e-4493363fd833","resourceVersion":"317","creationTimestamp":"2023-09-11T11:27:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517978","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-517978","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_27_49_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:27:45Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0911 11:28:20.102684  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-517978
	I0911 11:28:20.102724  227744 round_trippers.go:469] Request Headers:
	I0911 11:28:20.102737  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:28:20.102747  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:28:20.105145  227744 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:28:20.105172  227744 round_trippers.go:577] Response Headers:
	I0911 11:28:20.105183  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:28:20 GMT
	I0911 11:28:20.105192  227744 round_trippers.go:580]     Audit-Id: 0b750120-e0b6-4978-82d6-0dbf412b8819
	I0911 11:28:20.105200  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:28:20.105208  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:28:20.105217  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:28:20.105226  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:28:20.105361  227744 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517978","uid":"731ec007-5e9d-4124-a20e-4493363fd833","resourceVersion":"317","creationTimestamp":"2023-09-11T11:27:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517978","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-517978","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_27_49_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:27:45Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0911 11:28:20.105718  227744 node_ready.go:58] node "multinode-517978" has status "Ready":"False"
	I0911 11:28:20.602989  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-517978
	I0911 11:28:20.603008  227744 round_trippers.go:469] Request Headers:
	I0911 11:28:20.603017  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:28:20.603023  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:28:20.605184  227744 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:28:20.605214  227744 round_trippers.go:577] Response Headers:
	I0911 11:28:20.605223  227744 round_trippers.go:580]     Audit-Id: 0795ea6f-d279-4ecd-a1bf-0810293b53c1
	I0911 11:28:20.605231  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:28:20.605239  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:28:20.605246  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:28:20.605254  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:28:20.605264  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:28:20 GMT
	I0911 11:28:20.605432  227744 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517978","uid":"731ec007-5e9d-4124-a20e-4493363fd833","resourceVersion":"317","creationTimestamp":"2023-09-11T11:27:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517978","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-517978","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_27_49_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:27:45Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0911 11:28:21.102951  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-517978
	I0911 11:28:21.102976  227744 round_trippers.go:469] Request Headers:
	I0911 11:28:21.102988  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:28:21.102997  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:28:21.105359  227744 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:28:21.105389  227744 round_trippers.go:577] Response Headers:
	I0911 11:28:21.105401  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:28:21.105410  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:28:21.105419  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:28:21 GMT
	I0911 11:28:21.105428  227744 round_trippers.go:580]     Audit-Id: 055a7d0e-d90e-42c7-bfe3-6aa397a1246d
	I0911 11:28:21.105438  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:28:21.105455  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:28:21.105600  227744 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517978","uid":"731ec007-5e9d-4124-a20e-4493363fd833","resourceVersion":"317","creationTimestamp":"2023-09-11T11:27:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517978","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-517978","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_27_49_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:27:45Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0911 11:28:21.602153  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-517978
	I0911 11:28:21.602176  227744 round_trippers.go:469] Request Headers:
	I0911 11:28:21.602184  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:28:21.602191  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:28:21.604565  227744 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:28:21.604589  227744 round_trippers.go:577] Response Headers:
	I0911 11:28:21.604599  227744 round_trippers.go:580]     Audit-Id: e580000f-9da4-4be4-8e47-54ec7b437c89
	I0911 11:28:21.604608  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:28:21.604616  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:28:21.604625  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:28:21.604635  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:28:21.604644  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:28:21 GMT
	I0911 11:28:21.604777  227744 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517978","uid":"731ec007-5e9d-4124-a20e-4493363fd833","resourceVersion":"317","creationTimestamp":"2023-09-11T11:27:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517978","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-517978","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_27_49_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:27:45Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0911 11:28:22.102826  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-517978
	I0911 11:28:22.102848  227744 round_trippers.go:469] Request Headers:
	I0911 11:28:22.102857  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:28:22.102863  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:28:22.105081  227744 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:28:22.105112  227744 round_trippers.go:577] Response Headers:
	I0911 11:28:22.105121  227744 round_trippers.go:580]     Audit-Id: 0ae93ece-aa46-4474-8ac8-a06361ec6008
	I0911 11:28:22.105130  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:28:22.105138  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:28:22.105146  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:28:22.105155  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:28:22.105163  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:28:22 GMT
	I0911 11:28:22.105341  227744 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517978","uid":"731ec007-5e9d-4124-a20e-4493363fd833","resourceVersion":"317","creationTimestamp":"2023-09-11T11:27:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517978","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-517978","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_27_49_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:27:45Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0911 11:28:22.602986  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-517978
	I0911 11:28:22.603009  227744 round_trippers.go:469] Request Headers:
	I0911 11:28:22.603017  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:28:22.603024  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:28:22.605273  227744 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:28:22.605297  227744 round_trippers.go:577] Response Headers:
	I0911 11:28:22.605307  227744 round_trippers.go:580]     Audit-Id: 33160d28-8ac8-42c0-9fd4-7cc934aa5614
	I0911 11:28:22.605316  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:28:22.605338  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:28:22.605347  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:28:22.605357  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:28:22.605369  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:28:22 GMT
	I0911 11:28:22.605480  227744 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517978","uid":"731ec007-5e9d-4124-a20e-4493363fd833","resourceVersion":"317","creationTimestamp":"2023-09-11T11:27:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517978","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-517978","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_27_49_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:27:45Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0911 11:28:22.605817  227744 node_ready.go:58] node "multinode-517978" has status "Ready":"False"
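
The round_trippers.go:463/469/574/580 and request.go:1212 entries around each poll are client-go's built-in HTTP tracing, which activates at high klog verbosity (approximately -v=6 for URL and status, -v=7 for headers, -v=8 for truncated response bodies such as the "[truncated 6141 chars]" lines here; exact thresholds may vary by client-go version). A sketch of enabling the same tracing in a standalone program, assuming k8s.io/klog/v2:

package main

import (
	"flag"

	"k8s.io/klog/v2"
)

func main() {
	// Register klog's flags on a private FlagSet and raise verbosity so that
	// client-go's debugging round tripper emits request/response trace lines.
	fs := flag.NewFlagSet("logging", flag.ExitOnError)
	klog.InitFlags(fs)
	_ = fs.Set("v", "8")              // ~8: headers plus truncated bodies
	_ = fs.Set("logtostderr", "true") // write to stderr, as in the log above
	// ...then build a rest.Config and Clientset and issue requests as usual;
	// each GET is traced in the same format as the entries in this report.
}
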
	I0911 11:28:23.102025  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-517978
	I0911 11:28:23.102045  227744 round_trippers.go:469] Request Headers:
	I0911 11:28:23.102053  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:28:23.102065  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:28:23.104418  227744 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:28:23.104439  227744 round_trippers.go:577] Response Headers:
	I0911 11:28:23.104449  227744 round_trippers.go:580]     Audit-Id: 038dd1a9-de04-4cc2-8a7c-1998c842feb3
	I0911 11:28:23.104458  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:28:23.104466  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:28:23.104473  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:28:23.104484  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:28:23.104492  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:28:23 GMT
	I0911 11:28:23.104653  227744 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517978","uid":"731ec007-5e9d-4124-a20e-4493363fd833","resourceVersion":"317","creationTimestamp":"2023-09-11T11:27:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517978","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-517978","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_27_49_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:27:45Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0911 11:28:23.602247  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-517978
	I0911 11:28:23.602266  227744 round_trippers.go:469] Request Headers:
	I0911 11:28:23.602274  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:28:23.602295  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:28:23.604546  227744 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:28:23.604565  227744 round_trippers.go:577] Response Headers:
	I0911 11:28:23.604572  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:28:23.604578  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:28:23.604604  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:28:23.604610  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:28:23.604615  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:28:23 GMT
	I0911 11:28:23.604620  227744 round_trippers.go:580]     Audit-Id: a9e0895c-3193-4200-91f6-c148b32e3cc0
	I0911 11:28:23.604809  227744 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517978","uid":"731ec007-5e9d-4124-a20e-4493363fd833","resourceVersion":"317","creationTimestamp":"2023-09-11T11:27:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517978","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-517978","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_27_49_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:27:45Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0911 11:28:24.102304  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-517978
	I0911 11:28:24.102326  227744 round_trippers.go:469] Request Headers:
	I0911 11:28:24.102339  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:28:24.102350  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:28:24.104513  227744 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:28:24.104535  227744 round_trippers.go:577] Response Headers:
	I0911 11:28:24.104544  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:28:24.104553  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:28:24 GMT
	I0911 11:28:24.104560  227744 round_trippers.go:580]     Audit-Id: 92f67aeb-1bd7-42b0-a15c-6d121764e44f
	I0911 11:28:24.104567  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:28:24.104575  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:28:24.104586  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:28:24.104736  227744 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517978","uid":"731ec007-5e9d-4124-a20e-4493363fd833","resourceVersion":"317","creationTimestamp":"2023-09-11T11:27:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517978","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-517978","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_27_49_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:27:45Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0911 11:28:24.602258  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-517978
	I0911 11:28:24.602281  227744 round_trippers.go:469] Request Headers:
	I0911 11:28:24.602289  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:28:24.602295  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:28:24.604491  227744 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:28:24.604512  227744 round_trippers.go:577] Response Headers:
	I0911 11:28:24.604520  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:28:24.604528  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:28:24.604537  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:28:24 GMT
	I0911 11:28:24.604546  227744 round_trippers.go:580]     Audit-Id: 3b884531-59c6-4e5c-aa45-2c42883eab24
	I0911 11:28:24.604559  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:28:24.604572  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:28:24.604729  227744 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517978","uid":"731ec007-5e9d-4124-a20e-4493363fd833","resourceVersion":"317","creationTimestamp":"2023-09-11T11:27:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517978","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-517978","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_27_49_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-11T11:27:45Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0911 11:28:25.102265  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-517978
	I0911 11:28:25.102287  227744 round_trippers.go:469] Request Headers:
	I0911 11:28:25.102295  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:28:25.102301  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:28:25.104533  227744 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:28:25.104551  227744 round_trippers.go:577] Response Headers:
	I0911 11:28:25.104557  227744 round_trippers.go:580]     Audit-Id: a9d2ace7-3f8a-41a6-a062-707f983ee678
	I0911 11:28:25.104563  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:28:25.104569  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:28:25.104575  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:28:25.104584  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:28:25.104614  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:28:25 GMT
	I0911 11:28:25.104738  227744 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517978","uid":"731ec007-5e9d-4124-a20e-4493363fd833","resourceVersion":"317","creationTimestamp":"2023-09-11T11:27:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517978","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-517978","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_27_49_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-11T11:27:45Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0911 11:28:25.105084  227744 node_ready.go:58] node "multinode-517978" has status "Ready":"False"
	I0911 11:28:25.602287  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-517978
	I0911 11:28:25.602309  227744 round_trippers.go:469] Request Headers:
	I0911 11:28:25.602318  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:28:25.602324  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:28:25.604707  227744 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:28:25.604726  227744 round_trippers.go:577] Response Headers:
	I0911 11:28:25.604734  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:28:25.604740  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:28:25.604746  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:28:25 GMT
	I0911 11:28:25.604752  227744 round_trippers.go:580]     Audit-Id: bf675af2-8c71-42e4-989f-545572910673
	I0911 11:28:25.604760  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:28:25.604768  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:28:25.604868  227744 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517978","uid":"731ec007-5e9d-4124-a20e-4493363fd833","resourceVersion":"317","creationTimestamp":"2023-09-11T11:27:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517978","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-517978","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_27_49_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-11T11:27:45Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0911 11:28:26.102635  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-517978
	I0911 11:28:26.102655  227744 round_trippers.go:469] Request Headers:
	I0911 11:28:26.102664  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:28:26.102670  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:28:26.104833  227744 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:28:26.104864  227744 round_trippers.go:577] Response Headers:
	I0911 11:28:26.104874  227744 round_trippers.go:580]     Audit-Id: 7df146bf-c1a7-4ef1-9494-f943709660a8
	I0911 11:28:26.104880  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:28:26.104885  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:28:26.104891  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:28:26.104896  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:28:26.104902  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:28:26 GMT
	I0911 11:28:26.105029  227744 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517978","uid":"731ec007-5e9d-4124-a20e-4493363fd833","resourceVersion":"317","creationTimestamp":"2023-09-11T11:27:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517978","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-517978","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_27_49_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-11T11:27:45Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0911 11:28:26.602403  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-517978
	I0911 11:28:26.602422  227744 round_trippers.go:469] Request Headers:
	I0911 11:28:26.602430  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:28:26.602436  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:28:26.604584  227744 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:28:26.604622  227744 round_trippers.go:577] Response Headers:
	I0911 11:28:26.604631  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:28:26 GMT
	I0911 11:28:26.604640  227744 round_trippers.go:580]     Audit-Id: f7576f83-c0e5-4aaa-9048-f82eef5d1691
	I0911 11:28:26.604649  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:28:26.604658  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:28:26.604671  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:28:26.604688  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:28:26.604791  227744 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517978","uid":"731ec007-5e9d-4124-a20e-4493363fd833","resourceVersion":"317","creationTimestamp":"2023-09-11T11:27:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517978","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-517978","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_27_49_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-11T11:27:45Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0911 11:28:27.102908  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-517978
	I0911 11:28:27.102933  227744 round_trippers.go:469] Request Headers:
	I0911 11:28:27.102941  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:28:27.102947  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:28:27.105085  227744 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:28:27.105110  227744 round_trippers.go:577] Response Headers:
	I0911 11:28:27.105123  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:28:27 GMT
	I0911 11:28:27.105132  227744 round_trippers.go:580]     Audit-Id: d7909122-da87-47e2-a580-941bef9b69af
	I0911 11:28:27.105141  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:28:27.105149  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:28:27.105156  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:28:27.105161  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:28:27.105264  227744 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517978","uid":"731ec007-5e9d-4124-a20e-4493363fd833","resourceVersion":"317","creationTimestamp":"2023-09-11T11:27:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517978","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-517978","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_27_49_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-11T11:27:45Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0911 11:28:27.105568  227744 node_ready.go:58] node "multinode-517978" has status "Ready":"False"
	I0911 11:28:27.602926  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-517978
	I0911 11:28:27.602952  227744 round_trippers.go:469] Request Headers:
	I0911 11:28:27.602965  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:28:27.602974  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:28:27.605219  227744 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:28:27.605242  227744 round_trippers.go:577] Response Headers:
	I0911 11:28:27.605250  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:28:27.605258  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:28:27.605266  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:28:27.605275  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:28:27 GMT
	I0911 11:28:27.605283  227744 round_trippers.go:580]     Audit-Id: b93b20f9-f30e-4447-8109-c8b1a19849fe
	I0911 11:28:27.605291  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:28:27.605395  227744 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517978","uid":"731ec007-5e9d-4124-a20e-4493363fd833","resourceVersion":"317","creationTimestamp":"2023-09-11T11:27:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517978","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-517978","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_27_49_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-11T11:27:45Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0911 11:28:28.101970  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-517978
	I0911 11:28:28.101990  227744 round_trippers.go:469] Request Headers:
	I0911 11:28:28.101998  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:28:28.102005  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:28:28.104225  227744 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:28:28.104253  227744 round_trippers.go:577] Response Headers:
	I0911 11:28:28.104264  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:28:28.104272  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:28:28.104281  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:28:28 GMT
	I0911 11:28:28.104291  227744 round_trippers.go:580]     Audit-Id: 6aa49a5e-e2a7-4446-be58-71dd3cb6d569
	I0911 11:28:28.104299  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:28:28.104309  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:28:28.104456  227744 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517978","uid":"731ec007-5e9d-4124-a20e-4493363fd833","resourceVersion":"317","creationTimestamp":"2023-09-11T11:27:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517978","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-517978","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_27_49_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-11T11:27:45Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0911 11:28:28.602003  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-517978
	I0911 11:28:28.602022  227744 round_trippers.go:469] Request Headers:
	I0911 11:28:28.602030  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:28:28.602036  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:28:28.604352  227744 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:28:28.604370  227744 round_trippers.go:577] Response Headers:
	I0911 11:28:28.604376  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:28:28.604384  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:28:28.604393  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:28:28 GMT
	I0911 11:28:28.604403  227744 round_trippers.go:580]     Audit-Id: a38f49e9-a5f5-4f7f-bc11-41a8f2f0078b
	I0911 11:28:28.604412  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:28:28.604423  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:28:28.604532  227744 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517978","uid":"731ec007-5e9d-4124-a20e-4493363fd833","resourceVersion":"317","creationTimestamp":"2023-09-11T11:27:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517978","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-517978","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_27_49_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-11T11:27:45Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0911 11:28:29.102748  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-517978
	I0911 11:28:29.102774  227744 round_trippers.go:469] Request Headers:
	I0911 11:28:29.102786  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:28:29.102796  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:28:29.104654  227744 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0911 11:28:29.104671  227744 round_trippers.go:577] Response Headers:
	I0911 11:28:29.104678  227744 round_trippers.go:580]     Audit-Id: 11fd39c2-776d-4365-8a0e-cb4f05412dfd
	I0911 11:28:29.104684  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:28:29.104690  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:28:29.104698  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:28:29.104707  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:28:29.104716  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:28:29 GMT
	I0911 11:28:29.104868  227744 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517978","uid":"731ec007-5e9d-4124-a20e-4493363fd833","resourceVersion":"317","creationTimestamp":"2023-09-11T11:27:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517978","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-517978","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_27_49_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-11T11:27:45Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0911 11:28:29.602277  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-517978
	I0911 11:28:29.602301  227744 round_trippers.go:469] Request Headers:
	I0911 11:28:29.602309  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:28:29.602315  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:28:29.604634  227744 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:28:29.604658  227744 round_trippers.go:577] Response Headers:
	I0911 11:28:29.604668  227744 round_trippers.go:580]     Audit-Id: 86c97a30-19ef-4940-92f6-c1034c209890
	I0911 11:28:29.604677  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:28:29.604686  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:28:29.604695  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:28:29.604704  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:28:29.604713  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:28:29 GMT
	I0911 11:28:29.604826  227744 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517978","uid":"731ec007-5e9d-4124-a20e-4493363fd833","resourceVersion":"317","creationTimestamp":"2023-09-11T11:27:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517978","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-517978","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_27_49_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-11T11:27:45Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0911 11:28:29.605151  227744 node_ready.go:58] node "multinode-517978" has status "Ready":"False"
	I0911 11:28:30.102248  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-517978
	I0911 11:28:30.102267  227744 round_trippers.go:469] Request Headers:
	I0911 11:28:30.102279  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:28:30.102285  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:28:30.104508  227744 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:28:30.104526  227744 round_trippers.go:577] Response Headers:
	I0911 11:28:30.104533  227744 round_trippers.go:580]     Audit-Id: 076ae90b-cfd0-4933-a25a-499dfd61ab7c
	I0911 11:28:30.104539  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:28:30.104544  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:28:30.104550  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:28:30.104555  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:28:30.104561  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:28:30 GMT
	I0911 11:28:30.104700  227744 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517978","uid":"731ec007-5e9d-4124-a20e-4493363fd833","resourceVersion":"317","creationTimestamp":"2023-09-11T11:27:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517978","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-517978","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_27_49_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-11T11:27:45Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0911 11:28:30.602258  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-517978
	I0911 11:28:30.602282  227744 round_trippers.go:469] Request Headers:
	I0911 11:28:30.602293  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:28:30.602301  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:28:30.604442  227744 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:28:30.604460  227744 round_trippers.go:577] Response Headers:
	I0911 11:28:30.604467  227744 round_trippers.go:580]     Audit-Id: 3fb4f4bd-8b40-4e50-b40e-6c228ec636ca
	I0911 11:28:30.604473  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:28:30.604478  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:28:30.604484  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:28:30.604489  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:28:30.604495  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:28:30 GMT
	I0911 11:28:30.604768  227744 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517978","uid":"731ec007-5e9d-4124-a20e-4493363fd833","resourceVersion":"317","creationTimestamp":"2023-09-11T11:27:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517978","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-517978","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_27_49_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-11T11:27:45Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0911 11:28:31.102399  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-517978
	I0911 11:28:31.102424  227744 round_trippers.go:469] Request Headers:
	I0911 11:28:31.102437  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:28:31.102446  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:28:31.104942  227744 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:28:31.104966  227744 round_trippers.go:577] Response Headers:
	I0911 11:28:31.104975  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:28:31.104984  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:28:31.104992  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:28:31.105000  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:28:31.105009  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:28:31 GMT
	I0911 11:28:31.105017  227744 round_trippers.go:580]     Audit-Id: 26fbb146-3306-4d92-b8ac-bac4a869a08a
	I0911 11:28:31.105160  227744 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517978","uid":"731ec007-5e9d-4124-a20e-4493363fd833","resourceVersion":"317","creationTimestamp":"2023-09-11T11:27:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517978","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-517978","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_27_49_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-11T11:27:45Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0911 11:28:31.602810  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-517978
	I0911 11:28:31.602831  227744 round_trippers.go:469] Request Headers:
	I0911 11:28:31.602839  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:28:31.602845  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:28:31.605042  227744 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:28:31.605065  227744 round_trippers.go:577] Response Headers:
	I0911 11:28:31.605074  227744 round_trippers.go:580]     Audit-Id: 1e2680fa-a69f-4ed8-87ea-ba0cc4ef2553
	I0911 11:28:31.605087  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:28:31.605096  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:28:31.605104  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:28:31.605115  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:28:31.605125  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:28:31 GMT
	I0911 11:28:31.605228  227744 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517978","uid":"731ec007-5e9d-4124-a20e-4493363fd833","resourceVersion":"317","creationTimestamp":"2023-09-11T11:27:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517978","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-517978","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_27_49_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-11T11:27:45Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0911 11:28:31.605619  227744 node_ready.go:58] node "multinode-517978" has status "Ready":"False"
	I0911 11:28:32.102313  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-517978
	I0911 11:28:32.102336  227744 round_trippers.go:469] Request Headers:
	I0911 11:28:32.102349  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:28:32.102359  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:28:32.104721  227744 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:28:32.104745  227744 round_trippers.go:577] Response Headers:
	I0911 11:28:32.104753  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:28:32 GMT
	I0911 11:28:32.104759  227744 round_trippers.go:580]     Audit-Id: 143c3da7-f4b4-46a1-8492-347f3643ba44
	I0911 11:28:32.104764  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:28:32.104771  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:28:32.104781  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:28:32.104790  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:28:32.104964  227744 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517978","uid":"731ec007-5e9d-4124-a20e-4493363fd833","resourceVersion":"317","creationTimestamp":"2023-09-11T11:27:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517978","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-517978","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_27_49_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-11T11:27:45Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0911 11:28:32.602344  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-517978
	I0911 11:28:32.602369  227744 round_trippers.go:469] Request Headers:
	I0911 11:28:32.602377  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:28:32.602385  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:28:32.604775  227744 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:28:32.604798  227744 round_trippers.go:577] Response Headers:
	I0911 11:28:32.604808  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:28:32 GMT
	I0911 11:28:32.604818  227744 round_trippers.go:580]     Audit-Id: e1039889-8c30-4ee9-8066-f5692bd2d211
	I0911 11:28:32.604827  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:28:32.604836  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:28:32.604843  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:28:32.604849  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:28:32.604968  227744 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517978","uid":"731ec007-5e9d-4124-a20e-4493363fd833","resourceVersion":"317","creationTimestamp":"2023-09-11T11:27:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517978","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-517978","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_27_49_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-11T11:27:45Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0911 11:28:33.102256  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-517978
	I0911 11:28:33.102276  227744 round_trippers.go:469] Request Headers:
	I0911 11:28:33.102284  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:28:33.102290  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:28:33.104507  227744 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:28:33.104526  227744 round_trippers.go:577] Response Headers:
	I0911 11:28:33.104533  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:28:33.104538  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:28:33.104544  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:28:33 GMT
	I0911 11:28:33.104549  227744 round_trippers.go:580]     Audit-Id: 158615d8-60d7-4b36-9db3-5f1095519a39
	I0911 11:28:33.104556  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:28:33.104561  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:28:33.104714  227744 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517978","uid":"731ec007-5e9d-4124-a20e-4493363fd833","resourceVersion":"394","creationTimestamp":"2023-09-11T11:27:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517978","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-517978","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_27_49_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-11T11:27:45Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0911 11:28:33.105006  227744 node_ready.go:49] node "multinode-517978" has status "Ready":"True"
	I0911 11:28:33.105020  227744 node_ready.go:38] duration metric: took 31.01075398s waiting for node "multinode-517978" to be "Ready" ...
	I0911 11:28:33.105027  227744 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0911 11:28:33.105084  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0911 11:28:33.105093  227744 round_trippers.go:469] Request Headers:
	I0911 11:28:33.105099  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:28:33.105105  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:28:33.107959  227744 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:28:33.107977  227744 round_trippers.go:577] Response Headers:
	I0911 11:28:33.107984  227744 round_trippers.go:580]     Audit-Id: ee8714a6-6c82-4ff3-9942-2bbfd33128e4
	I0911 11:28:33.107989  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:28:33.107994  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:28:33.108000  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:28:33.108005  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:28:33.108010  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:28:33 GMT
	I0911 11:28:33.108426  227744 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"400"},"items":[{"metadata":{"name":"coredns-5dd5756b68-lmlsc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b64f2269-78cb-4e36-a2a7-e1818a2b093b","resourceVersion":"400","creationTimestamp":"2023-09-11T11:28:01Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"16860251-bd9c-478c-9785-957df249c5c5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:28:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"16860251-bd9c-478c-9785-957df249c5c5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55535 chars]
	I0911 11:28:33.111462  227744 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-lmlsc" in "kube-system" namespace to be "Ready" ...
	I0911 11:28:33.111528  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-lmlsc
	I0911 11:28:33.111536  227744 round_trippers.go:469] Request Headers:
	I0911 11:28:33.111543  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:28:33.111551  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:28:33.113320  227744 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0911 11:28:33.113335  227744 round_trippers.go:577] Response Headers:
	I0911 11:28:33.113342  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:28:33.113347  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:28:33.113353  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:28:33.113359  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:28:33 GMT
	I0911 11:28:33.113364  227744 round_trippers.go:580]     Audit-Id: 3251ef1b-7226-4596-a1d6-2b9884ad69c4
	I0911 11:28:33.113370  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:28:33.113530  227744 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-lmlsc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b64f2269-78cb-4e36-a2a7-e1818a2b093b","resourceVersion":"400","creationTimestamp":"2023-09-11T11:28:01Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"16860251-bd9c-478c-9785-957df249c5c5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:28:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"16860251-bd9c-478c-9785-957df249c5c5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I0911 11:28:33.113915  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-517978
	I0911 11:28:33.113926  227744 round_trippers.go:469] Request Headers:
	I0911 11:28:33.113933  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:28:33.113939  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:28:33.115545  227744 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0911 11:28:33.115567  227744 round_trippers.go:577] Response Headers:
	I0911 11:28:33.115587  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:28:33 GMT
	I0911 11:28:33.115599  227744 round_trippers.go:580]     Audit-Id: ed40d8af-d4e6-4db0-8825-aa25cfcfd445
	I0911 11:28:33.115612  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:28:33.115620  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:28:33.115632  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:28:33.115638  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:28:33.115727  227744 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517978","uid":"731ec007-5e9d-4124-a20e-4493363fd833","resourceVersion":"394","creationTimestamp":"2023-09-11T11:27:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517978","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-517978","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_27_49_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-11T11:27:45Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0911 11:28:33.116021  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-lmlsc
	I0911 11:28:33.116031  227744 round_trippers.go:469] Request Headers:
	I0911 11:28:33.116038  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:28:33.116045  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:28:33.117727  227744 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0911 11:28:33.117746  227744 round_trippers.go:577] Response Headers:
	I0911 11:28:33.117755  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:28:33.117764  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:28:33.117773  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:28:33.117784  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:28:33 GMT
	I0911 11:28:33.117792  227744 round_trippers.go:580]     Audit-Id: c10e8494-2f6b-4cf2-bb71-43e08567850f
	I0911 11:28:33.117803  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:28:33.117922  227744 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-lmlsc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b64f2269-78cb-4e36-a2a7-e1818a2b093b","resourceVersion":"400","creationTimestamp":"2023-09-11T11:28:01Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"16860251-bd9c-478c-9785-957df249c5c5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:28:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"16860251-bd9c-478c-9785-957df249c5c5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I0911 11:28:33.118329  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-517978
	I0911 11:28:33.118345  227744 round_trippers.go:469] Request Headers:
	I0911 11:28:33.118352  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:28:33.118358  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:28:33.119961  227744 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0911 11:28:33.119974  227744 round_trippers.go:577] Response Headers:
	I0911 11:28:33.119980  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:28:33.119986  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:28:33.119991  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:28:33.119996  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:28:33.120001  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:28:33 GMT
	I0911 11:28:33.120007  227744 round_trippers.go:580]     Audit-Id: 82169e64-bc59-4f6f-8c81-df508c9f8a30
	I0911 11:28:33.120538  227744 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517978","uid":"731ec007-5e9d-4124-a20e-4493363fd833","resourceVersion":"394","creationTimestamp":"2023-09-11T11:27:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517978","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-517978","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_27_49_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-11T11:27:45Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0911 11:28:33.621344  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-lmlsc
	I0911 11:28:33.621367  227744 round_trippers.go:469] Request Headers:
	I0911 11:28:33.621375  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:28:33.621382  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:28:33.623911  227744 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:28:33.623927  227744 round_trippers.go:577] Response Headers:
	I0911 11:28:33.623934  227744 round_trippers.go:580]     Audit-Id: d0645f3b-b198-47af-b8a5-9e30f89cfb02
	I0911 11:28:33.623940  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:28:33.623952  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:28:33.623961  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:28:33.623973  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:28:33.623982  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:28:33 GMT
	I0911 11:28:33.624105  227744 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-lmlsc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b64f2269-78cb-4e36-a2a7-e1818a2b093b","resourceVersion":"400","creationTimestamp":"2023-09-11T11:28:01Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"16860251-bd9c-478c-9785-957df249c5c5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:28:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"16860251-bd9c-478c-9785-957df249c5c5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I0911 11:28:33.624540  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-517978
	I0911 11:28:33.624551  227744 round_trippers.go:469] Request Headers:
	I0911 11:28:33.624558  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:28:33.624564  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:28:33.626452  227744 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0911 11:28:33.626470  227744 round_trippers.go:577] Response Headers:
	I0911 11:28:33.626480  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:28:33.626488  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:28:33.626496  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:28:33.626505  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:28:33 GMT
	I0911 11:28:33.626514  227744 round_trippers.go:580]     Audit-Id: 944c18a5-0248-4e83-ab6c-afa9e2ab492d
	I0911 11:28:33.626527  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:28:33.626685  227744 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517978","uid":"731ec007-5e9d-4124-a20e-4493363fd833","resourceVersion":"394","creationTimestamp":"2023-09-11T11:27:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517978","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-517978","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_27_49_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-11T11:27:45Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0911 11:28:34.121266  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-lmlsc
	I0911 11:28:34.121287  227744 round_trippers.go:469] Request Headers:
	I0911 11:28:34.121295  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:28:34.121301  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:28:34.123798  227744 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:28:34.123816  227744 round_trippers.go:577] Response Headers:
	I0911 11:28:34.123823  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:28:34.123829  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:28:34.123834  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:28:34.123839  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:28:34.123845  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:28:34 GMT
	I0911 11:28:34.123851  227744 round_trippers.go:580]     Audit-Id: 0fc4e3a5-157e-40e3-9a59-718e63ec0619
	I0911 11:28:34.123942  227744 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-lmlsc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b64f2269-78cb-4e36-a2a7-e1818a2b093b","resourceVersion":"413","creationTimestamp":"2023-09-11T11:28:01Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"16860251-bd9c-478c-9785-957df249c5c5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:28:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"16860251-bd9c-478c-9785-957df249c5c5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6263 chars]
	I0911 11:28:34.124378  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-517978
	I0911 11:28:34.124390  227744 round_trippers.go:469] Request Headers:
	I0911 11:28:34.124396  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:28:34.124404  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:28:34.126298  227744 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0911 11:28:34.126321  227744 round_trippers.go:577] Response Headers:
	I0911 11:28:34.126331  227744 round_trippers.go:580]     Audit-Id: a0ead9d1-ea5d-488c-bef0-67ec8d41108a
	I0911 11:28:34.126340  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:28:34.126348  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:28:34.126357  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:28:34.126370  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:28:34.126379  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:28:34 GMT
	I0911 11:28:34.126512  227744 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517978","uid":"731ec007-5e9d-4124-a20e-4493363fd833","resourceVersion":"394","creationTimestamp":"2023-09-11T11:27:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517978","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-517978","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_27_49_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-11T11:27:45Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0911 11:28:34.126830  227744 pod_ready.go:92] pod "coredns-5dd5756b68-lmlsc" in "kube-system" namespace has status "Ready":"True"
	I0911 11:28:34.126846  227744 pod_ready.go:81] duration metric: took 1.015362125s waiting for pod "coredns-5dd5756b68-lmlsc" in "kube-system" namespace to be "Ready" ...
	I0911 11:28:34.126855  227744 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-517978" in "kube-system" namespace to be "Ready" ...
	I0911 11:28:34.126900  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-517978
	I0911 11:28:34.126907  227744 round_trippers.go:469] Request Headers:
	I0911 11:28:34.126914  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:28:34.126920  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:28:34.128730  227744 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0911 11:28:34.128752  227744 round_trippers.go:577] Response Headers:
	I0911 11:28:34.128762  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:28:34.128769  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:28:34.128775  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:28:34 GMT
	I0911 11:28:34.128784  227744 round_trippers.go:580]     Audit-Id: 15953491-ee73-48ed-b0ba-a2ce7997a1d9
	I0911 11:28:34.128793  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:28:34.128802  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:28:34.128883  227744 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-517978","namespace":"kube-system","uid":"e8ee6b0b-aa4d-4315-8ce1-13e67c030138","resourceVersion":"366","creationTimestamp":"2023-09-11T11:27:49Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"0372ec0a10a9e8ac933ccf1ab6d3e37f","kubernetes.io/config.mirror":"0372ec0a10a9e8ac933ccf1ab6d3e37f","kubernetes.io/config.seen":"2023-09-11T11:27:48.688259211Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-517978","uid":"731ec007-5e9d-4124-a20e-4493363fd833","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:27:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5833 chars]
	I0911 11:28:34.129318  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-517978
	I0911 11:28:34.129337  227744 round_trippers.go:469] Request Headers:
	I0911 11:28:34.129344  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:28:34.129353  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:28:34.131010  227744 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0911 11:28:34.131027  227744 round_trippers.go:577] Response Headers:
	I0911 11:28:34.131036  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:28:34 GMT
	I0911 11:28:34.131044  227744 round_trippers.go:580]     Audit-Id: dd8d1d77-8b62-418d-8382-48b2c26f1faa
	I0911 11:28:34.131052  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:28:34.131062  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:28:34.131072  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:28:34.131088  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:28:34.131191  227744 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517978","uid":"731ec007-5e9d-4124-a20e-4493363fd833","resourceVersion":"394","creationTimestamp":"2023-09-11T11:27:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517978","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-517978","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_27_49_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-11T11:27:45Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0911 11:28:34.131482  227744 pod_ready.go:92] pod "etcd-multinode-517978" in "kube-system" namespace has status "Ready":"True"
	I0911 11:28:34.131497  227744 pod_ready.go:81] duration metric: took 4.633471ms waiting for pod "etcd-multinode-517978" in "kube-system" namespace to be "Ready" ...
	I0911 11:28:34.131512  227744 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-517978" in "kube-system" namespace to be "Ready" ...
	I0911 11:28:34.131561  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-517978
	I0911 11:28:34.131571  227744 round_trippers.go:469] Request Headers:
	I0911 11:28:34.131581  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:28:34.131591  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:28:34.133233  227744 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0911 11:28:34.133254  227744 round_trippers.go:577] Response Headers:
	I0911 11:28:34.133264  227744 round_trippers.go:580]     Audit-Id: 66de9a04-1603-4697-84d7-b1f5b866f6fc
	I0911 11:28:34.133273  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:28:34.133286  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:28:34.133298  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:28:34.133304  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:28:34.133310  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:28:34 GMT
	I0911 11:28:34.133399  227744 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-517978","namespace":"kube-system","uid":"9dc7326e-a6f6-4477-9175-5db6d08e3c2d","resourceVersion":"383","creationTimestamp":"2023-09-11T11:27:47Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"ad1dd79f381ff90e532fcfdde7e87da6","kubernetes.io/config.mirror":"ad1dd79f381ff90e532fcfdde7e87da6","kubernetes.io/config.seen":"2023-09-11T11:27:42.825396082Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-517978","uid":"731ec007-5e9d-4124-a20e-4493363fd833","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:27:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8219 chars]
	I0911 11:28:34.133815  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-517978
	I0911 11:28:34.133828  227744 round_trippers.go:469] Request Headers:
	I0911 11:28:34.133835  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:28:34.133841  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:28:34.135530  227744 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0911 11:28:34.135552  227744 round_trippers.go:577] Response Headers:
	I0911 11:28:34.135563  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:28:34 GMT
	I0911 11:28:34.135572  227744 round_trippers.go:580]     Audit-Id: cbd0131d-98c0-4a16-b005-03fdde960445
	I0911 11:28:34.135581  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:28:34.135593  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:28:34.135604  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:28:34.135612  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:28:34.135719  227744 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517978","uid":"731ec007-5e9d-4124-a20e-4493363fd833","resourceVersion":"394","creationTimestamp":"2023-09-11T11:27:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517978","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-517978","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_27_49_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-11T11:27:45Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0911 11:28:34.135997  227744 pod_ready.go:92] pod "kube-apiserver-multinode-517978" in "kube-system" namespace has status "Ready":"True"
	I0911 11:28:34.136009  227744 pod_ready.go:81] duration metric: took 4.490707ms waiting for pod "kube-apiserver-multinode-517978" in "kube-system" namespace to be "Ready" ...
	I0911 11:28:34.136016  227744 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-517978" in "kube-system" namespace to be "Ready" ...
	I0911 11:28:34.136057  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-517978
	I0911 11:28:34.136064  227744 round_trippers.go:469] Request Headers:
	I0911 11:28:34.136071  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:28:34.136077  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:28:34.137850  227744 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0911 11:28:34.137873  227744 round_trippers.go:577] Response Headers:
	I0911 11:28:34.137883  227744 round_trippers.go:580]     Audit-Id: dc61a84c-d662-4397-a777-8c711b09c609
	I0911 11:28:34.137892  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:28:34.137908  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:28:34.137917  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:28:34.137927  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:28:34.137941  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:28:34 GMT
	I0911 11:28:34.138124  227744 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-517978","namespace":"kube-system","uid":"0ed00710-145d-4aad-91c2-df770397db59","resourceVersion":"384","creationTimestamp":"2023-09-11T11:27:49Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"6d8452f232ba425055b35eb6d6a7e4f2","kubernetes.io/config.mirror":"6d8452f232ba425055b35eb6d6a7e4f2","kubernetes.io/config.seen":"2023-09-11T11:27:48.688264911Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-517978","uid":"731ec007-5e9d-4124-a20e-4493363fd833","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:27:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7794 chars]
	I0911 11:28:34.302886  227744 request.go:629] Waited for 164.344924ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-517978
	I0911 11:28:34.302959  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-517978
	I0911 11:28:34.302964  227744 round_trippers.go:469] Request Headers:
	I0911 11:28:34.302971  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:28:34.302978  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:28:34.305197  227744 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:28:34.305216  227744 round_trippers.go:577] Response Headers:
	I0911 11:28:34.305226  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:28:34.305234  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:28:34.305242  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:28:34.305249  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:28:34.305257  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:28:34 GMT
	I0911 11:28:34.305265  227744 round_trippers.go:580]     Audit-Id: d2f7650d-9889-433e-a19f-ad44253639ed
	I0911 11:28:34.305392  227744 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517978","uid":"731ec007-5e9d-4124-a20e-4493363fd833","resourceVersion":"394","creationTimestamp":"2023-09-11T11:27:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517978","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-517978","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_27_49_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-11T11:27:45Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0911 11:28:34.305722  227744 pod_ready.go:92] pod "kube-controller-manager-multinode-517978" in "kube-system" namespace has status "Ready":"True"
	I0911 11:28:34.305744  227744 pod_ready.go:81] duration metric: took 169.714416ms waiting for pod "kube-controller-manager-multinode-517978" in "kube-system" namespace to be "Ready" ...
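
The "Waited for … due to client-side throttling, not priority and fairness" lines above come from client-go's built-in client-side rate limiter, not from the API server: with a zero-valued rest.Config the client defaults to roughly 5 requests per second with a burst of 10, so a tight polling loop like this one gets delayed locally before API Priority and Fairness is ever involved. A minimal sketch of raising those limits (values are illustrative, not what minikube uses):

    package main

    import (
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        // Zero values mean "use the defaults" (QPS 5, Burst 10 at the time of writing);
        // raising them avoids the client-side "Waited for …" delays seen in this log.
        cfg.QPS = 50
        cfg.Burst = 100
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        fmt.Printf("client ready: %T\n", cs)
    }
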
	I0911 11:28:34.305760  227744 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-s8g9f" in "kube-system" namespace to be "Ready" ...
	I0911 11:28:34.503211  227744 request.go:629] Waited for 197.36618ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s8g9f
	I0911 11:28:34.503268  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s8g9f
	I0911 11:28:34.503273  227744 round_trippers.go:469] Request Headers:
	I0911 11:28:34.503281  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:28:34.503287  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:28:34.505838  227744 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:28:34.505865  227744 round_trippers.go:577] Response Headers:
	I0911 11:28:34.505877  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:28:34 GMT
	I0911 11:28:34.505887  227744 round_trippers.go:580]     Audit-Id: c357cd56-9fd7-465a-b1f2-9f4500ce9281
	I0911 11:28:34.505897  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:28:34.505906  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:28:34.505915  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:28:34.505925  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:28:34.506059  227744 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-s8g9f","generateName":"kube-proxy-","namespace":"kube-system","uid":"68f14c0f-00e4-4014-9613-36142d843e61","resourceVersion":"371","creationTimestamp":"2023-09-11T11:28:01Z","labels":{"controller-revision-hash":"5d69f4f5b5","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d613dcb2-6db5-48c2-9ef6-def50c5b18eb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:28:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d613dcb2-6db5-48c2-9ef6-def50c5b18eb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5510 chars]
	I0911 11:28:34.703298  227744 request.go:629] Waited for 196.773242ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-517978
	I0911 11:28:34.703367  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-517978
	I0911 11:28:34.703372  227744 round_trippers.go:469] Request Headers:
	I0911 11:28:34.703380  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:28:34.703386  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:28:34.705587  227744 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:28:34.705609  227744 round_trippers.go:577] Response Headers:
	I0911 11:28:34.705616  227744 round_trippers.go:580]     Audit-Id: 6920a98e-99d6-45ff-ab69-a282e7a08e6e
	I0911 11:28:34.705621  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:28:34.705627  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:28:34.705634  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:28:34.705648  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:28:34.705657  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:28:34 GMT
	I0911 11:28:34.705774  227744 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517978","uid":"731ec007-5e9d-4124-a20e-4493363fd833","resourceVersion":"394","creationTimestamp":"2023-09-11T11:27:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517978","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-517978","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_27_49_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-11T11:27:45Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0911 11:28:34.706175  227744 pod_ready.go:92] pod "kube-proxy-s8g9f" in "kube-system" namespace has status "Ready":"True"
	I0911 11:28:34.706193  227744 pod_ready.go:81] duration metric: took 400.4249ms waiting for pod "kube-proxy-s8g9f" in "kube-system" namespace to be "Ready" ...
	I0911 11:28:34.706206  227744 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-517978" in "kube-system" namespace to be "Ready" ...
	I0911 11:28:34.902659  227744 request.go:629] Waited for 196.358628ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-517978
	I0911 11:28:34.902721  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-517978
	I0911 11:28:34.902725  227744 round_trippers.go:469] Request Headers:
	I0911 11:28:34.902733  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:28:34.902749  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:28:34.905153  227744 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:28:34.905172  227744 round_trippers.go:577] Response Headers:
	I0911 11:28:34.905179  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:28:34.905185  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:28:34.905196  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:28:34.905204  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:28:34.905213  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:28:34 GMT
	I0911 11:28:34.905229  227744 round_trippers.go:580]     Audit-Id: 358ed2f4-a072-4a62-a725-8d23065d0a3e
	I0911 11:28:34.905334  227744 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-517978","namespace":"kube-system","uid":"d30acde8-4c9a-4857-b218-979934d9d41d","resourceVersion":"365","creationTimestamp":"2023-09-11T11:27:46Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"b68774f5eef9a19c580916204e8da67e","kubernetes.io/config.mirror":"b68774f5eef9a19c580916204e8da67e","kubernetes.io/config.seen":"2023-09-11T11:27:42.825392745Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-517978","uid":"731ec007-5e9d-4124-a20e-4493363fd833","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:27:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4676 chars]
	I0911 11:28:35.103101  227744 request.go:629] Waited for 197.372548ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-517978
	I0911 11:28:35.103176  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-517978
	I0911 11:28:35.103181  227744 round_trippers.go:469] Request Headers:
	I0911 11:28:35.103188  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:28:35.103196  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:28:35.105392  227744 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:28:35.105417  227744 round_trippers.go:577] Response Headers:
	I0911 11:28:35.105427  227744 round_trippers.go:580]     Audit-Id: bd28d570-13a5-4d0a-b821-e82c33802323
	I0911 11:28:35.105437  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:28:35.105446  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:28:35.105456  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:28:35.105462  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:28:35.105472  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:28:35 GMT
	I0911 11:28:35.105553  227744 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517978","uid":"731ec007-5e9d-4124-a20e-4493363fd833","resourceVersion":"394","creationTimestamp":"2023-09-11T11:27:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517978","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-517978","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_27_49_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-11T11:27:45Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0911 11:28:35.105874  227744 pod_ready.go:92] pod "kube-scheduler-multinode-517978" in "kube-system" namespace has status "Ready":"True"
	I0911 11:28:35.105890  227744 pod_ready.go:81] duration metric: took 399.6745ms waiting for pod "kube-scheduler-multinode-517978" in "kube-system" namespace to be "Ready" ...
	I0911 11:28:35.105907  227744 pod_ready.go:38] duration metric: took 2.000864675s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
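
The block above is the pod_ready loop: each system pod is fetched repeatedly until its PodReady condition reports "True", with a per-pod budget of 6m0s. A minimal client-go sketch of that pattern (pod name, interval, and timeout are illustrative; this is not minikube's actual code):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Poll every 500ms until the pod's Ready condition is True, up to 6 minutes,
        // mirroring the "waiting up to 6m0s for pod … to be Ready" lines above.
        err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                pod, getErr := cs.CoreV1().Pods("kube-system").Get(ctx, "etcd-multinode-517978", metav1.GetOptions{})
                if getErr != nil {
                    return false, nil // treat errors as "not ready yet" and keep polling
                }
                for _, c := range pod.Status.Conditions {
                    if c.Type == corev1.PodReady {
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
        fmt.Println("ready:", err == nil)
    }
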
	I0911 11:28:35.105929  227744 api_server.go:52] waiting for apiserver process to appear ...
	I0911 11:28:35.105983  227744 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 11:28:35.115914  227744 command_runner.go:130] > 1443
	I0911 11:28:35.116652  227744 api_server.go:72] duration metric: took 33.206387075s to wait for apiserver process to appear ...
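
The process check above shells into the node and runs pgrep: -x requires an exact match, -n selects the newest matching process, and -f matches against the full command line, so the single line of output ("1443") is the apiserver's PID. The same check run locally (a sketch; minikube executes it remotely over its ssh runner):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // -x: exact match, -n: newest matching process, -f: match the full command line.
        out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
        if err != nil {
            fmt.Println("no apiserver process found:", err)
            return
        }
        fmt.Println("apiserver PID:", strings.TrimSpace(string(out)))
    }
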
	I0911 11:28:35.116673  227744 api_server.go:88] waiting for apiserver healthz status ...
	I0911 11:28:35.116693  227744 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0911 11:28:35.120885  227744 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0911 11:28:35.120957  227744 round_trippers.go:463] GET https://192.168.58.2:8443/version
	I0911 11:28:35.120967  227744 round_trippers.go:469] Request Headers:
	I0911 11:28:35.120979  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:28:35.120998  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:28:35.121899  227744 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0911 11:28:35.121918  227744 round_trippers.go:577] Response Headers:
	I0911 11:28:35.121927  227744 round_trippers.go:580]     Audit-Id: ca4eef44-5c1e-49e6-b8da-0738d4cdeb72
	I0911 11:28:35.121936  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:28:35.121943  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:28:35.121949  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:28:35.121955  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:28:35.121965  227744 round_trippers.go:580]     Content-Length: 263
	I0911 11:28:35.121970  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:28:35 GMT
	I0911 11:28:35.121993  227744 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.1",
	  "gitCommit": "8dc49c4b984b897d423aab4971090e1879eb4f23",
	  "gitTreeState": "clean",
	  "buildDate": "2023-08-24T11:16:30Z",
	  "goVersion": "go1.20.7",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0911 11:28:35.122067  227744 api_server.go:141] control plane version: v1.28.1
	I0911 11:28:35.122081  227744 api_server.go:131] duration metric: took 5.401452ms to wait for apiserver health ...
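
The health probe above is a plain GET against /healthz whose body is expected to be "ok", and the /version request corresponds to the discovery client's ServerVersion call, which returns exactly the fields in the JSON body shown (gitVersion, buildDate, platform, …). A sketch of both calls (not minikube's actual implementation, which builds its own HTTP client from the cluster certs):

    package main

    import (
        "context"
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // GET /healthz: the body "ok" signals a healthy apiserver.
        body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.Background())
        if err != nil {
            panic(err)
        }
        fmt.Println("healthz:", string(body))
        // GET /version: the same data as the JSON body in the log above.
        info, err := cs.Discovery().ServerVersion()
        if err != nil {
            panic(err)
        }
        fmt.Println("control plane version:", info.GitVersion) // e.g. "v1.28.1"
    }
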
	I0911 11:28:35.122104  227744 system_pods.go:43] waiting for kube-system pods to appear ...
	I0911 11:28:35.302352  227744 request.go:629] Waited for 180.175771ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0911 11:28:35.302408  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0911 11:28:35.302413  227744 round_trippers.go:469] Request Headers:
	I0911 11:28:35.302420  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:28:35.302427  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:28:35.305869  227744 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0911 11:28:35.305890  227744 round_trippers.go:577] Response Headers:
	I0911 11:28:35.305899  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:28:35.305905  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:28:35 GMT
	I0911 11:28:35.305911  227744 round_trippers.go:580]     Audit-Id: dca71142-957f-4b7c-92bc-b89088489970
	I0911 11:28:35.305920  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:28:35.305926  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:28:35.305944  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:28:35.306352  227744 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"417"},"items":[{"metadata":{"name":"coredns-5dd5756b68-lmlsc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b64f2269-78cb-4e36-a2a7-e1818a2b093b","resourceVersion":"413","creationTimestamp":"2023-09-11T11:28:01Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"16860251-bd9c-478c-9785-957df249c5c5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:28:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"16860251-bd9c-478c-9785-957df249c5c5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55613 chars]
	I0911 11:28:35.308120  227744 system_pods.go:59] 8 kube-system pods found
	I0911 11:28:35.308140  227744 system_pods.go:61] "coredns-5dd5756b68-lmlsc" [b64f2269-78cb-4e36-a2a7-e1818a2b093b] Running
	I0911 11:28:35.308144  227744 system_pods.go:61] "etcd-multinode-517978" [e8ee6b0b-aa4d-4315-8ce1-13e67c030138] Running
	I0911 11:28:35.308148  227744 system_pods.go:61] "kindnet-4qgdc" [54ada390-018a-48d2-841d-5b48f8117601] Running
	I0911 11:28:35.308156  227744 system_pods.go:61] "kube-apiserver-multinode-517978" [9dc7326e-a6f6-4477-9175-5db6d08e3c2d] Running
	I0911 11:28:35.308164  227744 system_pods.go:61] "kube-controller-manager-multinode-517978" [0ed00710-145d-4aad-91c2-df770397db59] Running
	I0911 11:28:35.308169  227744 system_pods.go:61] "kube-proxy-s8g9f" [68f14c0f-00e4-4014-9613-36142d843e61] Running
	I0911 11:28:35.308173  227744 system_pods.go:61] "kube-scheduler-multinode-517978" [d30acde8-4c9a-4857-b218-979934d9d41d] Running
	I0911 11:28:35.308179  227744 system_pods.go:61] "storage-provisioner" [41366aba-7ecd-49af-a3e7-4139062a82c2] Running
	I0911 11:28:35.308184  227744 system_pods.go:74] duration metric: took 186.074761ms to wait for pod list to return data ...
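
The system_pods check lists everything in the kube-system namespace and reports each pod's phase, which is where the eight "Running" lines above come from. A compact sketch of the same listing (client construction as in the earlier sketches):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Printf("%d kube-system pods found\n", len(pods.Items))
        for _, p := range pods.Items {
            fmt.Printf("%q [%s] %s\n", p.Name, p.UID, p.Status.Phase) // e.g. Running
        }
    }
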
	I0911 11:28:35.308201  227744 default_sa.go:34] waiting for default service account to be created ...
	I0911 11:28:35.502665  227744 request.go:629] Waited for 194.393534ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I0911 11:28:35.502749  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I0911 11:28:35.502758  227744 round_trippers.go:469] Request Headers:
	I0911 11:28:35.502766  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:28:35.502773  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:28:35.505151  227744 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:28:35.505172  227744 round_trippers.go:577] Response Headers:
	I0911 11:28:35.505180  227744 round_trippers.go:580]     Audit-Id: 6eb161b7-511a-4e7d-8230-549fce2ec5da
	I0911 11:28:35.505186  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:28:35.505191  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:28:35.505197  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:28:35.505203  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:28:35.505210  227744 round_trippers.go:580]     Content-Length: 261
	I0911 11:28:35.505215  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:28:35 GMT
	I0911 11:28:35.505238  227744 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"418"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"d8ab9e4c-0b39-4177-a7a9-bfe2f12b13b3","resourceVersion":"330","creationTimestamp":"2023-09-11T11:28:01Z"}}]}
	I0911 11:28:35.505427  227744 default_sa.go:45] found service account: "default"
	I0911 11:28:35.505440  227744 default_sa.go:55] duration metric: took 197.235275ms for default service account to be created ...
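
The "default" service account is created asynchronously by the controller manager after the namespace appears, which is why this step polls for it; the check itself is just a list of serviceaccounts in the default namespace. A sketch of that check as a helper (assumes a *kubernetes.Clientset cs built as in the sketches above, plus the context and metav1 imports used there):

    // Returns true once the controller manager has created the "default"
    // ServiceAccount, matching the ServiceAccountList response in the log.
    func defaultServiceAccountExists(ctx context.Context, cs *kubernetes.Clientset) (bool, error) {
        sas, err := cs.CoreV1().ServiceAccounts("default").List(ctx, metav1.ListOptions{})
        if err != nil {
            return false, err
        }
        for _, sa := range sas.Items {
            if sa.Name == "default" {
                return true, nil
            }
        }
        return false, nil
    }
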
	I0911 11:28:35.505448  227744 system_pods.go:116] waiting for k8s-apps to be running ...
	I0911 11:28:35.702867  227744 request.go:629] Waited for 197.357308ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0911 11:28:35.702984  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0911 11:28:35.702995  227744 round_trippers.go:469] Request Headers:
	I0911 11:28:35.703003  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:28:35.703009  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:28:35.706207  227744 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0911 11:28:35.706236  227744 round_trippers.go:577] Response Headers:
	I0911 11:28:35.706247  227744 round_trippers.go:580]     Audit-Id: 51c4691a-0fc6-48d9-b241-9f279a281526
	I0911 11:28:35.706255  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:28:35.706265  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:28:35.706273  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:28:35.706291  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:28:35.706302  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:28:35 GMT
	I0911 11:28:35.706798  227744 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"418"},"items":[{"metadata":{"name":"coredns-5dd5756b68-lmlsc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b64f2269-78cb-4e36-a2a7-e1818a2b093b","resourceVersion":"413","creationTimestamp":"2023-09-11T11:28:01Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"16860251-bd9c-478c-9785-957df249c5c5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:28:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"16860251-bd9c-478c-9785-957df249c5c5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55613 chars]
	I0911 11:28:35.708539  227744 system_pods.go:86] 8 kube-system pods found
	I0911 11:28:35.708562  227744 system_pods.go:89] "coredns-5dd5756b68-lmlsc" [b64f2269-78cb-4e36-a2a7-e1818a2b093b] Running
	I0911 11:28:35.708589  227744 system_pods.go:89] "etcd-multinode-517978" [e8ee6b0b-aa4d-4315-8ce1-13e67c030138] Running
	I0911 11:28:35.708599  227744 system_pods.go:89] "kindnet-4qgdc" [54ada390-018a-48d2-841d-5b48f8117601] Running
	I0911 11:28:35.708604  227744 system_pods.go:89] "kube-apiserver-multinode-517978" [9dc7326e-a6f6-4477-9175-5db6d08e3c2d] Running
	I0911 11:28:35.708609  227744 system_pods.go:89] "kube-controller-manager-multinode-517978" [0ed00710-145d-4aad-91c2-df770397db59] Running
	I0911 11:28:35.708613  227744 system_pods.go:89] "kube-proxy-s8g9f" [68f14c0f-00e4-4014-9613-36142d843e61] Running
	I0911 11:28:35.708618  227744 system_pods.go:89] "kube-scheduler-multinode-517978" [d30acde8-4c9a-4857-b218-979934d9d41d] Running
	I0911 11:28:35.708624  227744 system_pods.go:89] "storage-provisioner" [41366aba-7ecd-49af-a3e7-4139062a82c2] Running
	I0911 11:28:35.708632  227744 system_pods.go:126] duration metric: took 203.179104ms to wait for k8s-apps to be running ...
	I0911 11:28:35.708645  227744 system_svc.go:44] waiting for kubelet service to be running ....
	I0911 11:28:35.708689  227744 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0911 11:28:35.719526  227744 system_svc.go:56] duration metric: took 10.870223ms WaitForService to wait for kubelet.
	I0911 11:28:35.719555  227744 kubeadm.go:581] duration metric: took 33.80929415s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0911 11:28:35.719579  227744 node_conditions.go:102] verifying NodePressure condition ...
	I0911 11:28:35.903021  227744 request.go:629] Waited for 183.343411ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I0911 11:28:35.903079  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I0911 11:28:35.903084  227744 round_trippers.go:469] Request Headers:
	I0911 11:28:35.903092  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:28:35.903098  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:28:35.905212  227744 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:28:35.905234  227744 round_trippers.go:577] Response Headers:
	I0911 11:28:35.905241  227744 round_trippers.go:580]     Audit-Id: 7eb70971-99c5-45b2-b05f-45a9e88ecc78
	I0911 11:28:35.905247  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:28:35.905253  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:28:35.905259  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:28:35.905265  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:28:35.905273  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:28:35 GMT
	I0911 11:28:35.905471  227744 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"418"},"items":[{"metadata":{"name":"multinode-517978","uid":"731ec007-5e9d-4124-a20e-4493363fd833","resourceVersion":"394","creationTimestamp":"2023-09-11T11:27:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517978","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-517978","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_27_49_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 6000 chars]
	I0911 11:28:35.905824  227744 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0911 11:28:35.905839  227744 node_conditions.go:123] node cpu capacity is 8
	I0911 11:28:35.905850  227744 node_conditions.go:105] duration metric: took 186.265491ms to run NodePressure ...
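
The NodePressure verification reads each node's status: the capacity figures logged above (304681132Ki of ephemeral storage, 8 CPUs) come straight from node.Status.Capacity, and pressure itself is reported through node conditions such as MemoryPressure and DiskPressure. A sketch of reading those fields (illustrative, not minikube's code):

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
            for _, c := range n.Status.Conditions {
                if c.Type == corev1.NodeMemoryPressure || c.Type == corev1.NodeDiskPressure {
                    fmt.Printf("  %s=%s\n", c.Type, c.Status) // want "False" for both
                }
            }
        }
    }
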
	I0911 11:28:35.905862  227744 start.go:228] waiting for startup goroutines ...
	I0911 11:28:35.905876  227744 start.go:233] waiting for cluster config update ...
	I0911 11:28:35.905884  227744 start.go:242] writing updated cluster config ...
	I0911 11:28:35.908809  227744 out.go:177] 
	I0911 11:28:35.910576  227744 config.go:182] Loaded profile config "multinode-517978": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0911 11:28:35.910673  227744 profile.go:148] Saving config to /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/multinode-517978/config.json ...
	I0911 11:28:35.912681  227744 out.go:177] * Starting worker node multinode-517978-m02 in cluster multinode-517978
	I0911 11:28:35.914187  227744 cache.go:122] Beginning downloading kic base image for docker with crio
	I0911 11:28:35.915840  227744 out.go:177] * Pulling base image ...
	I0911 11:28:35.917397  227744 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0911 11:28:35.917428  227744 cache.go:57] Caching tarball of preloaded images
	I0911 11:28:35.917512  227744 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b in local docker daemon
	I0911 11:28:35.917528  227744 preload.go:174] Found /home/jenkins/minikube-integration/17223-136166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0911 11:28:35.917537  227744 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on crio
	I0911 11:28:35.917642  227744 profile.go:148] Saving config to /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/multinode-517978/config.json ...
	I0911 11:28:35.933818  227744 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b in local docker daemon, skipping pull
	I0911 11:28:35.933838  227744 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b exists in daemon, skipping load
	I0911 11:28:35.933851  227744 cache.go:195] Successfully downloaded all kic artifacts
	I0911 11:28:35.933886  227744 start.go:365] acquiring machines lock for multinode-517978-m02: {Name:mk64ed57cada1f59cb66952d460c6dea5bbf86a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 11:28:35.933998  227744 start.go:369] acquired machines lock for "multinode-517978-m02" in 89.324µs
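
The machines lock above serializes machine creation across concurrent minikube invocations. The Spec printed in the log ({Name:… Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}) has the shape of the github.com/juju/mutex API, so the following is a sketch under that assumption (module path assumed to be the v2 one; the lock name is shortened for illustration, while the real one, mk64ed…, is derived from a path hash):

    package main

    import (
        "fmt"
        "time"

        "github.com/juju/clock"
        "github.com/juju/mutex/v2"
    )

    func main() {
        spec := mutex.Spec{
            Name:    "machines-demo",        // illustrative; the log's name is a path hash
            Clock:   clock.WallClock,        // wall-clock time source for retries
            Delay:   500 * time.Millisecond, // retry interval, matching Delay:500ms in the log
            Timeout: 10 * time.Minute,       // give up after 10m, matching Timeout:10m0s
        }
        releaser, err := mutex.Acquire(spec)
        if err != nil {
            panic(err)
        }
        defer releaser.Release()
        fmt.Println("machines lock held")
    }
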
	I0911 11:28:35.934023  227744 start.go:93] Provisioning new machine with config: &{Name:multinode-517978 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:multinode-517978 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mou
nt9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name:m02 IP: Port:0 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0911 11:28:35.934145  227744 start.go:125] createHost starting for "m02" (driver="docker")
	I0911 11:28:35.936275  227744 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0911 11:28:35.936360  227744 start.go:159] libmachine.API.Create for "multinode-517978" (driver="docker")
	I0911 11:28:35.936388  227744 client.go:168] LocalClient.Create starting
	I0911 11:28:35.936453  227744 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17223-136166/.minikube/certs/ca.pem
	I0911 11:28:35.936491  227744 main.go:141] libmachine: Decoding PEM data...
	I0911 11:28:35.936513  227744 main.go:141] libmachine: Parsing certificate...
	I0911 11:28:35.936571  227744 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17223-136166/.minikube/certs/cert.pem
	I0911 11:28:35.936598  227744 main.go:141] libmachine: Decoding PEM data...
	I0911 11:28:35.936616  227744 main.go:141] libmachine: Parsing certificate...
	I0911 11:28:35.936832  227744 cli_runner.go:164] Run: docker network inspect multinode-517978 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0911 11:28:35.953854  227744 network_create.go:76] Found existing network {name:multinode-517978 subnet:0xc0011a4240 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 58 1] mtu:1500}
	I0911 11:28:35.953907  227744 kic.go:117] calculated static IP "192.168.58.3" for the "multinode-517978-m02" container
	I0911 11:28:35.953976  227744 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0911 11:28:35.969492  227744 cli_runner.go:164] Run: docker volume create multinode-517978-m02 --label name.minikube.sigs.k8s.io=multinode-517978-m02 --label created_by.minikube.sigs.k8s.io=true
	I0911 11:28:35.987377  227744 oci.go:103] Successfully created a docker volume multinode-517978-m02
	I0911 11:28:35.987465  227744 cli_runner.go:164] Run: docker run --rm --name multinode-517978-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-517978-m02 --entrypoint /usr/bin/test -v multinode-517978-m02:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b -d /var/lib
	I0911 11:28:36.512179  227744 oci.go:107] Successfully prepared a docker volume multinode-517978-m02
	I0911 11:28:36.512222  227744 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0911 11:28:36.512242  227744 kic.go:190] Starting extracting preloaded images to volume ...
	I0911 11:28:36.512302  227744 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17223-136166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-517978-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b -I lz4 -xf /preloaded.tar -C /extractDir
	I0911 11:28:41.562000  227744 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17223-136166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-517978-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b -I lz4 -xf /preloaded.tar -C /extractDir: (5.049641586s)
	I0911 11:28:41.562033  227744 kic.go:199] duration metric: took 5.049786 seconds to extract preloaded images to volume
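The preload step above is self-contained enough to reproduce by hand: mount the lz4 tarball read-only into a throwaway kicbase container and untar it into the node's /var volume. A sketch, with the long image reference and tarball path (both verbatim from the log) moved into variables:

    KIC_IMAGE='gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b'
    PRELOAD=/home/jenkins/minikube-integration/17223-136166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4
    docker run --rm --entrypoint /usr/bin/tar \
      -v "$PRELOAD":/preloaded.tar:ro \
      -v multinode-517978-m02:/extractDir \
      "$KIC_IMAGE" -I lz4 -xf /preloaded.tar -C /extractDir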
	W0911 11:28:41.562187  227744 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0911 11:28:41.562281  227744 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0911 11:28:41.617739  227744 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-517978-m02 --name multinode-517978-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-517978-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-517978-m02 --network multinode-517978 --ip 192.168.58.3 --volume multinode-517978-m02:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b
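The container create above is one very long line; grouped by purpose it reads as follows (same flags reordered, minikube's bookkeeping --label flags elided, KIC_IMAGE as in the previous sketch):

    docker run -d -t --name multinode-517978-m02 --hostname multinode-517978-m02 \
      --privileged --security-opt seccomp=unconfined --security-opt apparmor=unconfined \
      --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro \
      --network multinode-517978 --ip 192.168.58.3 --volume multinode-517978-m02:/var \
      --memory=2200mb --cpus=2 -e container=docker \
      --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 \
      --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 \
      "$KIC_IMAGE"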
	I0911 11:28:41.912408  227744 cli_runner.go:164] Run: docker container inspect multinode-517978-m02 --format={{.State.Running}}
	I0911 11:28:41.928436  227744 cli_runner.go:164] Run: docker container inspect multinode-517978-m02 --format={{.State.Status}}
	I0911 11:28:41.946285  227744 cli_runner.go:164] Run: docker exec multinode-517978-m02 stat /var/lib/dpkg/alternatives/iptables
	I0911 11:28:41.989667  227744 oci.go:144] the created container "multinode-517978-m02" has a running status.
	I0911 11:28:41.989713  227744 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/17223-136166/.minikube/machines/multinode-517978-m02/id_rsa...
	I0911 11:28:42.203380  227744 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-136166/.minikube/machines/multinode-517978-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0911 11:28:42.203463  227744 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17223-136166/.minikube/machines/multinode-517978-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0911 11:28:42.223962  227744 cli_runner.go:164] Run: docker container inspect multinode-517978-m02 --format={{.State.Status}}
	I0911 11:28:42.242201  227744 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0911 11:28:42.242228  227744 kic_runner.go:114] Args: [docker exec --privileged multinode-517978-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
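The three log lines above inject the freshly generated public key into the node container. Roughly, and assuming a plain docker cp for the temp-file transfer (the log records only the source and destination, not the copy mechanism):

    M=multinode-517978-m02
    KEY=/home/jenkins/minikube-integration/17223-136166/.minikube/machines/$M/id_rsa
    docker cp "$KEY.pub" "$M":/home/docker/.ssh/authorized_keys
    docker exec --privileged "$M" chown docker:docker /home/docker/.ssh/authorized_keys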
	I0911 11:28:42.311098  227744 cli_runner.go:164] Run: docker container inspect multinode-517978-m02 --format={{.State.Status}}
	I0911 11:28:42.333593  227744 machine.go:88] provisioning docker machine ...
	I0911 11:28:42.333640  227744 ubuntu.go:169] provisioning hostname "multinode-517978-m02"
	I0911 11:28:42.333708  227744 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-517978-m02
	I0911 11:28:42.355784  227744 main.go:141] libmachine: Using SSH client type: native
	I0911 11:28:42.356301  227744 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 127.0.0.1 32972 <nil> <nil>}
	I0911 11:28:42.356319  227744 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-517978-m02 && echo "multinode-517978-m02" | sudo tee /etc/hostname
	I0911 11:28:42.572099  227744 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-517978-m02
	
	I0911 11:28:42.572178  227744 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-517978-m02
	I0911 11:28:42.589919  227744 main.go:141] libmachine: Using SSH client type: native
	I0911 11:28:42.590387  227744 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 127.0.0.1 32972 <nil> <nil>}
	I0911 11:28:42.590409  227744 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-517978-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-517978-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-517978-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0911 11:28:42.722016  227744 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0911 11:28:42.722038  227744 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17223-136166/.minikube CaCertPath:/home/jenkins/minikube-integration/17223-136166/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17223-136166/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17223-136166/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17223-136166/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17223-136166/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17223-136166/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17223-136166/.minikube}
	I0911 11:28:42.722054  227744 ubuntu.go:177] setting up certificates
	I0911 11:28:42.722063  227744 provision.go:83] configureAuth start
	I0911 11:28:42.722183  227744 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-517978-m02
	I0911 11:28:42.738220  227744 provision.go:138] copyHostCerts
	I0911 11:28:42.738256  227744 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-136166/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17223-136166/.minikube/key.pem
	I0911 11:28:42.738285  227744 exec_runner.go:144] found /home/jenkins/minikube-integration/17223-136166/.minikube/key.pem, removing ...
	I0911 11:28:42.738297  227744 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17223-136166/.minikube/key.pem
	I0911 11:28:42.738386  227744 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17223-136166/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17223-136166/.minikube/key.pem (1679 bytes)
	I0911 11:28:42.738486  227744 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-136166/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17223-136166/.minikube/ca.pem
	I0911 11:28:42.738505  227744 exec_runner.go:144] found /home/jenkins/minikube-integration/17223-136166/.minikube/ca.pem, removing ...
	I0911 11:28:42.738511  227744 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17223-136166/.minikube/ca.pem
	I0911 11:28:42.738538  227744 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17223-136166/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17223-136166/.minikube/ca.pem (1082 bytes)
	I0911 11:28:42.738606  227744 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-136166/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17223-136166/.minikube/cert.pem
	I0911 11:28:42.738622  227744 exec_runner.go:144] found /home/jenkins/minikube-integration/17223-136166/.minikube/cert.pem, removing ...
	I0911 11:28:42.738629  227744 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17223-136166/.minikube/cert.pem
	I0911 11:28:42.738651  227744 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17223-136166/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17223-136166/.minikube/cert.pem (1123 bytes)
	I0911 11:28:42.738696  227744 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17223-136166/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17223-136166/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17223-136166/.minikube/certs/ca-key.pem org=jenkins.multinode-517978-m02 san=[192.168.58.3 127.0.0.1 localhost 127.0.0.1 minikube multinode-517978-m02]
	I0911 11:28:42.793838  227744 provision.go:172] copyRemoteCerts
	I0911 11:28:42.793902  227744 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0911 11:28:42.793936  227744 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-517978-m02
	I0911 11:28:42.810331  227744 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32972 SSHKeyPath:/home/jenkins/minikube-integration/17223-136166/.minikube/machines/multinode-517978-m02/id_rsa Username:docker}
	I0911 11:28:42.902296  227744 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-136166/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0911 11:28:42.902363  227744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0911 11:28:42.924121  227744 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-136166/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0911 11:28:42.924186  227744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0911 11:28:42.944683  227744 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-136166/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0911 11:28:42.944740  227744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0911 11:28:42.966254  227744 provision.go:86] duration metric: configureAuth took 244.170305ms
	I0911 11:28:42.966281  227744 ubuntu.go:193] setting minikube options for container-runtime
	I0911 11:28:42.966449  227744 config.go:182] Loaded profile config "multinode-517978": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0911 11:28:42.966560  227744 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-517978-m02
	I0911 11:28:42.982366  227744 main.go:141] libmachine: Using SSH client type: native
	I0911 11:28:42.982776  227744 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 127.0.0.1 32972 <nil> <nil>}
	I0911 11:28:42.982794  227744 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0911 11:28:43.192010  227744 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0911 11:28:43.192040  227744 machine.go:91] provisioned docker machine in 858.41732ms
	I0911 11:28:43.192051  227744 client.go:171] LocalClient.Create took 7.255655753s
	I0911 11:28:43.192073  227744 start.go:167] duration metric: libmachine.API.Create for "multinode-517978" took 7.255712325s
	I0911 11:28:43.192083  227744 start.go:300] post-start starting for "multinode-517978-m02" (driver="docker")
	I0911 11:28:43.192093  227744 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0911 11:28:43.192175  227744 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0911 11:28:43.192218  227744 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-517978-m02
	I0911 11:28:43.209120  227744 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32972 SSHKeyPath:/home/jenkins/minikube-integration/17223-136166/.minikube/machines/multinode-517978-m02/id_rsa Username:docker}
	I0911 11:28:43.306528  227744 ssh_runner.go:195] Run: cat /etc/os-release
	I0911 11:28:43.309431  227744 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.3 LTS"
	I0911 11:28:43.309448  227744 command_runner.go:130] > NAME="Ubuntu"
	I0911 11:28:43.309457  227744 command_runner.go:130] > VERSION_ID="22.04"
	I0911 11:28:43.309465  227744 command_runner.go:130] > VERSION="22.04.3 LTS (Jammy Jellyfish)"
	I0911 11:28:43.309472  227744 command_runner.go:130] > VERSION_CODENAME=jammy
	I0911 11:28:43.309479  227744 command_runner.go:130] > ID=ubuntu
	I0911 11:28:43.309485  227744 command_runner.go:130] > ID_LIKE=debian
	I0911 11:28:43.309494  227744 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0911 11:28:43.309503  227744 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0911 11:28:43.309512  227744 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0911 11:28:43.309525  227744 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0911 11:28:43.309530  227744 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I0911 11:28:43.309575  227744 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0911 11:28:43.309596  227744 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0911 11:28:43.309603  227744 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0911 11:28:43.309610  227744 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0911 11:28:43.309619  227744 filesync.go:126] Scanning /home/jenkins/minikube-integration/17223-136166/.minikube/addons for local assets ...
	I0911 11:28:43.309678  227744 filesync.go:126] Scanning /home/jenkins/minikube-integration/17223-136166/.minikube/files for local assets ...
	I0911 11:28:43.309766  227744 filesync.go:149] local asset: /home/jenkins/minikube-integration/17223-136166/.minikube/files/etc/ssl/certs/1434172.pem -> 1434172.pem in /etc/ssl/certs
	I0911 11:28:43.309778  227744 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-136166/.minikube/files/etc/ssl/certs/1434172.pem -> /etc/ssl/certs/1434172.pem
	I0911 11:28:43.309862  227744 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0911 11:28:43.317422  227744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/files/etc/ssl/certs/1434172.pem --> /etc/ssl/certs/1434172.pem (1708 bytes)
	I0911 11:28:43.338342  227744 start.go:303] post-start completed in 146.243028ms
	I0911 11:28:43.338702  227744 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-517978-m02
	I0911 11:28:43.354287  227744 profile.go:148] Saving config to /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/multinode-517978/config.json ...
	I0911 11:28:43.354516  227744 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0911 11:28:43.354556  227744 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-517978-m02
	I0911 11:28:43.371710  227744 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32972 SSHKeyPath:/home/jenkins/minikube-integration/17223-136166/.minikube/machines/multinode-517978-m02/id_rsa Username:docker}
	I0911 11:28:43.458518  227744 command_runner.go:130] > 23%
	I0911 11:28:43.458783  227744 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0911 11:28:43.462669  227744 command_runner.go:130] > 225G
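Both capacity probes are plain df one-liners and can be rerun by hand; column $5 of df -h is percent used, column $4 of df -BG is gigabytes available:

    df -h  /var | awk 'NR==2{print $5}'   # e.g. 23%
    df -BG /var | awk 'NR==2{print $4}'   # e.g. 225G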
	I0911 11:28:43.462848  227744 start.go:128] duration metric: createHost completed in 7.528690799s
	I0911 11:28:43.462865  227744 start.go:83] releasing machines lock for "multinode-517978-m02", held for 7.528855193s
	I0911 11:28:43.462917  227744 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-517978-m02
	I0911 11:28:43.481379  227744 out.go:177] * Found network options:
	I0911 11:28:43.482897  227744 out.go:177]   - NO_PROXY=192.168.58.2
	W0911 11:28:43.484343  227744 proxy.go:119] fail to check proxy env: Error ip not in block
	W0911 11:28:43.484397  227744 proxy.go:119] fail to check proxy env: Error ip not in block
	I0911 11:28:43.484471  227744 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0911 11:28:43.484519  227744 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-517978-m02
	I0911 11:28:43.484537  227744 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0911 11:28:43.484620  227744 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-517978-m02
	I0911 11:28:43.503116  227744 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32972 SSHKeyPath:/home/jenkins/minikube-integration/17223-136166/.minikube/machines/multinode-517978-m02/id_rsa Username:docker}
	I0911 11:28:43.503351  227744 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32972 SSHKeyPath:/home/jenkins/minikube-integration/17223-136166/.minikube/machines/multinode-517978-m02/id_rsa Username:docker}
	I0911 11:28:43.723462  227744 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0911 11:28:43.723554  227744 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0911 11:28:43.727559  227744 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0911 11:28:43.727578  227744 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I0911 11:28:43.727586  227744 command_runner.go:130] > Device: b0h/176d	Inode: 4167201     Links: 1
	I0911 11:28:43.727597  227744 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0911 11:28:43.727616  227744 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I0911 11:28:43.727626  227744 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I0911 11:28:43.727634  227744 command_runner.go:130] > Change: 2023-09-11 11:09:30.748271188 +0000
	I0911 11:28:43.727644  227744 command_runner.go:130] >  Birth: 2023-09-11 11:09:30.748271188 +0000
	I0911 11:28:43.727739  227744 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0911 11:28:43.744710  227744 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0911 11:28:43.744810  227744 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0911 11:28:43.769944  227744 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I0911 11:28:43.769988  227744 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
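The two find/mv passes above move the loopback and bridge/podman CNI configs out of CRI-O's way by appending .mk_disabled to their names. With the globs quoted so the shell does not expand them before find sees them (a sketch; minikube sends the unquoted form over SSH):

    sudo find /etc/cni/net.d -maxdepth 1 -type f -name '*loopback.conf*' \
      -not -name '*.mk_disabled' -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -o -name '*podman*' \) -not -name '*.mk_disabled' \) \
      -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;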
	I0911 11:28:43.769996  227744 start.go:466] detecting cgroup driver to use...
	I0911 11:28:43.770021  227744 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0911 11:28:43.770062  227744 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0911 11:28:43.783502  227744 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0911 11:28:43.793716  227744 docker.go:196] disabling cri-docker service (if available) ...
	I0911 11:28:43.793773  227744 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0911 11:28:43.805634  227744 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0911 11:28:43.817691  227744 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0911 11:28:43.879733  227744 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0911 11:28:43.892781  227744 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I0911 11:28:43.963373  227744 docker.go:212] disabling docker service ...
	I0911 11:28:43.963446  227744 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0911 11:28:43.980748  227744 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0911 11:28:43.991223  227744 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0911 11:28:44.072959  227744 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0911 11:28:44.073036  227744 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0911 11:28:44.160346  227744 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0911 11:28:44.160418  227744 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0911 11:28:44.171618  227744 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0911 11:28:44.186215  227744 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0911 11:28:44.186251  227744 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0911 11:28:44.186296  227744 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 11:28:44.195164  227744 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0911 11:28:44.195228  227744 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 11:28:44.204147  227744 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 11:28:44.212681  227744 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 11:28:44.221421  227744 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0911 11:28:44.229425  227744 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0911 11:28:44.235987  227744 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0911 11:28:44.236620  227744 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0911 11:28:44.244154  227744 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0911 11:28:44.314776  227744 ssh_runner.go:195] Run: sudo systemctl restart crio
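Collected in one place, the CRI-O reconfiguration performed by the preceding sed/rm commands (all paths and values verbatim from the log):

    CONF=/etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' "$CONF"
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
    sudo sed -i '/conmon_cgroup = .*/d' "$CONF"
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
    sudo rm -rf /etc/cni/net.mk
    sudo systemctl daemon-reload && sudo systemctl restart crio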
	I0911 11:28:44.402511  227744 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0911 11:28:44.402571  227744 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0911 11:28:44.405789  227744 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0911 11:28:44.405809  227744 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0911 11:28:44.405817  227744 command_runner.go:130] > Device: b9h/185d	Inode: 190         Links: 1
	I0911 11:28:44.405823  227744 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0911 11:28:44.405828  227744 command_runner.go:130] > Access: 2023-09-11 11:28:44.388088602 +0000
	I0911 11:28:44.405835  227744 command_runner.go:130] > Modify: 2023-09-11 11:28:44.388088602 +0000
	I0911 11:28:44.405847  227744 command_runner.go:130] > Change: 2023-09-11 11:28:44.388088602 +0000
	I0911 11:28:44.405860  227744 command_runner.go:130] >  Birth: -
	I0911 11:28:44.405881  227744 start.go:534] Will wait 60s for crictl version
	I0911 11:28:44.405932  227744 ssh_runner.go:195] Run: which crictl
	I0911 11:28:44.408811  227744 command_runner.go:130] > /usr/bin/crictl
	I0911 11:28:44.408878  227744 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0911 11:28:44.438379  227744 command_runner.go:130] > Version:  0.1.0
	I0911 11:28:44.438401  227744 command_runner.go:130] > RuntimeName:  cri-o
	I0911 11:28:44.438408  227744 command_runner.go:130] > RuntimeVersion:  1.24.6
	I0911 11:28:44.438414  227744 command_runner.go:130] > RuntimeApiVersion:  v1
	I0911 11:28:44.440332  227744 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
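The two "Will wait 60s" gates poll for the socket and then for a responsive crictl. As a shell approximation only (minikube implements the wait in Go; this loop is an illustration, not its code):

    for i in $(seq 1 60); do
      [ -S /var/run/crio/crio.sock ] && break   # socket appears once crio is up
      sleep 1
    done
    sudo /usr/bin/crictl version                # expects RuntimeName: cri-o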
	I0911 11:28:44.440394  227744 ssh_runner.go:195] Run: crio --version
	I0911 11:28:44.473145  227744 command_runner.go:130] > crio version 1.24.6
	I0911 11:28:44.473165  227744 command_runner.go:130] > Version:          1.24.6
	I0911 11:28:44.473173  227744 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0911 11:28:44.473177  227744 command_runner.go:130] > GitTreeState:     clean
	I0911 11:28:44.473185  227744 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0911 11:28:44.473190  227744 command_runner.go:130] > GoVersion:        go1.18.2
	I0911 11:28:44.473194  227744 command_runner.go:130] > Compiler:         gc
	I0911 11:28:44.473198  227744 command_runner.go:130] > Platform:         linux/amd64
	I0911 11:28:44.473204  227744 command_runner.go:130] > Linkmode:         dynamic
	I0911 11:28:44.473211  227744 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0911 11:28:44.473216  227744 command_runner.go:130] > SeccompEnabled:   true
	I0911 11:28:44.473219  227744 command_runner.go:130] > AppArmorEnabled:  false
	I0911 11:28:44.473282  227744 ssh_runner.go:195] Run: crio --version
	I0911 11:28:44.504603  227744 command_runner.go:130] > crio version 1.24.6
	I0911 11:28:44.504622  227744 command_runner.go:130] > Version:          1.24.6
	I0911 11:28:44.504630  227744 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0911 11:28:44.504634  227744 command_runner.go:130] > GitTreeState:     clean
	I0911 11:28:44.504639  227744 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0911 11:28:44.504644  227744 command_runner.go:130] > GoVersion:        go1.18.2
	I0911 11:28:44.504648  227744 command_runner.go:130] > Compiler:         gc
	I0911 11:28:44.504652  227744 command_runner.go:130] > Platform:         linux/amd64
	I0911 11:28:44.504657  227744 command_runner.go:130] > Linkmode:         dynamic
	I0911 11:28:44.504665  227744 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0911 11:28:44.504672  227744 command_runner.go:130] > SeccompEnabled:   true
	I0911 11:28:44.504676  227744 command_runner.go:130] > AppArmorEnabled:  false
	I0911 11:28:44.508311  227744 out.go:177] * Preparing Kubernetes v1.28.1 on CRI-O 1.24.6 ...
	I0911 11:28:44.509912  227744 out.go:177]   - env NO_PROXY=192.168.58.2
	I0911 11:28:44.511539  227744 cli_runner.go:164] Run: docker network inspect multinode-517978 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0911 11:28:44.527681  227744 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0911 11:28:44.531216  227744 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
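The /etc/hosts update above uses a grep-then-rewrite pattern: drop any stale host.minikube.internal line, append the fresh mapping, and install the result with sudo cp (a direct "sudo ... > /etc/hosts" would fail, since the redirect is performed by the unprivileged shell). Reformatted:

    { grep -v $'\thost.minikube.internal$' /etc/hosts
      echo $'192.168.58.1\thost.minikube.internal'; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts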
	I0911 11:28:44.540805  227744 certs.go:56] Setting up /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/multinode-517978 for IP: 192.168.58.3
	I0911 11:28:44.540833  227744 certs.go:190] acquiring lock for shared ca certs: {Name:mk582952512be9164e5fce9dc802f18bafa97346 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 11:28:44.540957  227744 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17223-136166/.minikube/ca.key
	I0911 11:28:44.540994  227744 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17223-136166/.minikube/proxy-client-ca.key
	I0911 11:28:44.541006  227744 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-136166/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0911 11:28:44.541020  227744 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-136166/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0911 11:28:44.541031  227744 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-136166/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0911 11:28:44.541043  227744 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-136166/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0911 11:28:44.541097  227744 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-136166/.minikube/certs/home/jenkins/minikube-integration/17223-136166/.minikube/certs/143417.pem (1338 bytes)
	W0911 11:28:44.541129  227744 certs.go:433] ignoring /home/jenkins/minikube-integration/17223-136166/.minikube/certs/home/jenkins/minikube-integration/17223-136166/.minikube/certs/143417_empty.pem, impossibly tiny 0 bytes
	I0911 11:28:44.541139  227744 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-136166/.minikube/certs/home/jenkins/minikube-integration/17223-136166/.minikube/certs/ca-key.pem (1679 bytes)
	I0911 11:28:44.541164  227744 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-136166/.minikube/certs/home/jenkins/minikube-integration/17223-136166/.minikube/certs/ca.pem (1082 bytes)
	I0911 11:28:44.541191  227744 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-136166/.minikube/certs/home/jenkins/minikube-integration/17223-136166/.minikube/certs/cert.pem (1123 bytes)
	I0911 11:28:44.541214  227744 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-136166/.minikube/certs/home/jenkins/minikube-integration/17223-136166/.minikube/certs/key.pem (1679 bytes)
	I0911 11:28:44.541251  227744 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-136166/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17223-136166/.minikube/files/etc/ssl/certs/1434172.pem (1708 bytes)
	I0911 11:28:44.541276  227744 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-136166/.minikube/files/etc/ssl/certs/1434172.pem -> /usr/share/ca-certificates/1434172.pem
	I0911 11:28:44.541289  227744 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-136166/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0911 11:28:44.541300  227744 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-136166/.minikube/certs/143417.pem -> /usr/share/ca-certificates/143417.pem
	I0911 11:28:44.541654  227744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0911 11:28:44.563356  227744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0911 11:28:44.585004  227744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0911 11:28:44.605936  227744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0911 11:28:44.626385  227744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/files/etc/ssl/certs/1434172.pem --> /usr/share/ca-certificates/1434172.pem (1708 bytes)
	I0911 11:28:44.646959  227744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0911 11:28:44.667704  227744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/certs/143417.pem --> /usr/share/ca-certificates/143417.pem (1338 bytes)
	I0911 11:28:44.688370  227744 ssh_runner.go:195] Run: openssl version
	I0911 11:28:44.693011  227744 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I0911 11:28:44.693178  227744 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1434172.pem && ln -fs /usr/share/ca-certificates/1434172.pem /etc/ssl/certs/1434172.pem"
	I0911 11:28:44.701852  227744 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1434172.pem
	I0911 11:28:44.705123  227744 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep 11 11:15 /usr/share/ca-certificates/1434172.pem
	I0911 11:28:44.705150  227744 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 11 11:15 /usr/share/ca-certificates/1434172.pem
	I0911 11:28:44.705192  227744 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1434172.pem
	I0911 11:28:44.711007  227744 command_runner.go:130] > 3ec20f2e
	I0911 11:28:44.711203  227744 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1434172.pem /etc/ssl/certs/3ec20f2e.0"
	I0911 11:28:44.719422  227744 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0911 11:28:44.728107  227744 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0911 11:28:44.731215  227744 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep 11 11:09 /usr/share/ca-certificates/minikubeCA.pem
	I0911 11:28:44.731240  227744 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 11 11:09 /usr/share/ca-certificates/minikubeCA.pem
	I0911 11:28:44.731270  227744 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0911 11:28:44.737133  227744 command_runner.go:130] > b5213941
	I0911 11:28:44.737323  227744 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0911 11:28:44.745313  227744 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/143417.pem && ln -fs /usr/share/ca-certificates/143417.pem /etc/ssl/certs/143417.pem"
	I0911 11:28:44.753251  227744 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/143417.pem
	I0911 11:28:44.756326  227744 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep 11 11:15 /usr/share/ca-certificates/143417.pem
	I0911 11:28:44.756363  227744 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 11 11:15 /usr/share/ca-certificates/143417.pem
	I0911 11:28:44.756397  227744 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/143417.pem
	I0911 11:28:44.762128  227744 command_runner.go:130] > 51391683
	I0911 11:28:44.762333  227744 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/143417.pem /etc/ssl/certs/51391683.0"
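Each certificate above is installed with the same hash-and-symlink pattern, which is what lets OpenSSL locate a CA by subject hash (the manual form of what c_rehash automates). Using the minikubeCA values from the log:

    pem=/usr/share/ca-certificates/minikubeCA.pem
    h=$(openssl x509 -hash -noout -in "$pem")   # prints b5213941 above
    sudo ln -fs "$pem" "/etc/ssl/certs/$h.0"    # OpenSSL looks up <hash>.0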
	I0911 11:28:44.770424  227744 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0911 11:28:44.773344  227744 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0911 11:28:44.773436  227744 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0911 11:28:44.773545  227744 ssh_runner.go:195] Run: crio config
	I0911 11:28:44.811150  227744 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0911 11:28:44.811182  227744 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0911 11:28:44.811193  227744 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0911 11:28:44.811199  227744 command_runner.go:130] > #
	I0911 11:28:44.811212  227744 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0911 11:28:44.811223  227744 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0911 11:28:44.811233  227744 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0911 11:28:44.811245  227744 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0911 11:28:44.811252  227744 command_runner.go:130] > # reload'.
	I0911 11:28:44.811263  227744 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0911 11:28:44.811278  227744 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0911 11:28:44.811290  227744 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0911 11:28:44.811302  227744 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0911 11:28:44.811311  227744 command_runner.go:130] > [crio]
	I0911 11:28:44.811322  227744 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0911 11:28:44.811330  227744 command_runner.go:130] > # containers images, in this directory.
	I0911 11:28:44.811348  227744 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I0911 11:28:44.811365  227744 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0911 11:28:44.811375  227744 command_runner.go:130] > # runroot = "/tmp/containers-user-1000/containers"
	I0911 11:28:44.811385  227744 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0911 11:28:44.811395  227744 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0911 11:28:44.811407  227744 command_runner.go:130] > # storage_driver = "vfs"
	I0911 11:28:44.811417  227744 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0911 11:28:44.811439  227744 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0911 11:28:44.811446  227744 command_runner.go:130] > # storage_option = [
	I0911 11:28:44.811452  227744 command_runner.go:130] > # ]
	I0911 11:28:44.811463  227744 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0911 11:28:44.811473  227744 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0911 11:28:44.811481  227744 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0911 11:28:44.811491  227744 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0911 11:28:44.811501  227744 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0911 11:28:44.811509  227744 command_runner.go:130] > # always happen on a node reboot
	I0911 11:28:44.811518  227744 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0911 11:28:44.811530  227744 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0911 11:28:44.811545  227744 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0911 11:28:44.811558  227744 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0911 11:28:44.811571  227744 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0911 11:28:44.811584  227744 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0911 11:28:44.811600  227744 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0911 11:28:44.811611  227744 command_runner.go:130] > # internal_wipe = true
	I0911 11:28:44.811621  227744 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0911 11:28:44.811634  227744 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0911 11:28:44.811647  227744 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0911 11:28:44.811661  227744 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0911 11:28:44.811675  227744 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0911 11:28:44.811691  227744 command_runner.go:130] > [crio.api]
	I0911 11:28:44.811711  227744 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0911 11:28:44.811720  227744 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0911 11:28:44.811732  227744 command_runner.go:130] > # IP address on which the stream server will listen.
	I0911 11:28:44.811744  227744 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0911 11:28:44.811755  227744 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0911 11:28:44.811768  227744 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0911 11:28:44.811775  227744 command_runner.go:130] > # stream_port = "0"
	I0911 11:28:44.811787  227744 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0911 11:28:44.811799  227744 command_runner.go:130] > # stream_enable_tls = false
	I0911 11:28:44.811808  227744 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0911 11:28:44.811815  227744 command_runner.go:130] > # stream_idle_timeout = ""
	I0911 11:28:44.811826  227744 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0911 11:28:44.811840  227744 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0911 11:28:44.811849  227744 command_runner.go:130] > # minutes.
	I0911 11:28:44.811860  227744 command_runner.go:130] > # stream_tls_cert = ""
	I0911 11:28:44.811870  227744 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0911 11:28:44.811885  227744 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0911 11:28:44.811895  227744 command_runner.go:130] > # stream_tls_key = ""
	I0911 11:28:44.811905  227744 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0911 11:28:44.811920  227744 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0911 11:28:44.811932  227744 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0911 11:28:44.811941  227744 command_runner.go:130] > # stream_tls_ca = ""
	I0911 11:28:44.811954  227744 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0911 11:28:44.811965  227744 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I0911 11:28:44.811977  227744 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0911 11:28:44.811988  227744 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I0911 11:28:44.812042  227744 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0911 11:28:44.812061  227744 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0911 11:28:44.812068  227744 command_runner.go:130] > [crio.runtime]
	I0911 11:28:44.812086  227744 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0911 11:28:44.812099  227744 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0911 11:28:44.812106  227744 command_runner.go:130] > # "nofile=1024:2048"
	I0911 11:28:44.812118  227744 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0911 11:28:44.812129  227744 command_runner.go:130] > # default_ulimits = [
	I0911 11:28:44.812134  227744 command_runner.go:130] > # ]
	I0911 11:28:44.812148  227744 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0911 11:28:44.812155  227744 command_runner.go:130] > # no_pivot = false
	I0911 11:28:44.812169  227744 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0911 11:28:44.812186  227744 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0911 11:28:44.812197  227744 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0911 11:28:44.812207  227744 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0911 11:28:44.812218  227744 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0911 11:28:44.812230  227744 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0911 11:28:44.812242  227744 command_runner.go:130] > # conmon = ""
	I0911 11:28:44.812249  227744 command_runner.go:130] > # Cgroup setting for conmon
	I0911 11:28:44.812261  227744 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0911 11:28:44.812271  227744 command_runner.go:130] > conmon_cgroup = "pod"
	I0911 11:28:44.812283  227744 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0911 11:28:44.812295  227744 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0911 11:28:44.812306  227744 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0911 11:28:44.812326  227744 command_runner.go:130] > # conmon_env = [
	I0911 11:28:44.812332  227744 command_runner.go:130] > # ]
	I0911 11:28:44.812347  227744 command_runner.go:130] > # Additional environment variables to set for all the
	I0911 11:28:44.812356  227744 command_runner.go:130] > # containers. These are overridden if set in the
	I0911 11:28:44.812369  227744 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0911 11:28:44.812379  227744 command_runner.go:130] > # default_env = [
	I0911 11:28:44.812385  227744 command_runner.go:130] > # ]
	I0911 11:28:44.812408  227744 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0911 11:28:44.812415  227744 command_runner.go:130] > # selinux = false
	I0911 11:28:44.812424  227744 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0911 11:28:44.812433  227744 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0911 11:28:44.812447  227744 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0911 11:28:44.812457  227744 command_runner.go:130] > # seccomp_profile = ""
	I0911 11:28:44.812464  227744 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0911 11:28:44.812471  227744 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0911 11:28:44.812479  227744 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0911 11:28:44.812485  227744 command_runner.go:130] > # which might increase security.
	I0911 11:28:44.812490  227744 command_runner.go:130] > # seccomp_use_default_when_empty = true
	I0911 11:28:44.812498  227744 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0911 11:28:44.812506  227744 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0911 11:28:44.812513  227744 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0911 11:28:44.812521  227744 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0911 11:28:44.812527  227744 command_runner.go:130] > # This option supports live configuration reload.
	I0911 11:28:44.812533  227744 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0911 11:28:44.812543  227744 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0911 11:28:44.812549  227744 command_runner.go:130] > # the cgroup blockio controller.
	I0911 11:28:44.812555  227744 command_runner.go:130] > # blockio_config_file = ""
	I0911 11:28:44.812565  227744 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0911 11:28:44.812577  227744 command_runner.go:130] > # irqbalance daemon.
	I0911 11:28:44.812586  227744 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0911 11:28:44.812597  227744 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0911 11:28:44.812609  227744 command_runner.go:130] > # This option supports live configuration reload.
	I0911 11:28:44.812618  227744 command_runner.go:130] > # rdt_config_file = ""
	I0911 11:28:44.812625  227744 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0911 11:28:44.812631  227744 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0911 11:28:44.812638  227744 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0911 11:28:44.812644  227744 command_runner.go:130] > # separate_pull_cgroup = ""
	I0911 11:28:44.812651  227744 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0911 11:28:44.812659  227744 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0911 11:28:44.812663  227744 command_runner.go:130] > # will be added.
	I0911 11:28:44.812669  227744 command_runner.go:130] > # default_capabilities = [
	I0911 11:28:44.812673  227744 command_runner.go:130] > # 	"CHOWN",
	I0911 11:28:44.812679  227744 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0911 11:28:44.812683  227744 command_runner.go:130] > # 	"FSETID",
	I0911 11:28:44.812689  227744 command_runner.go:130] > # 	"FOWNER",
	I0911 11:28:44.812693  227744 command_runner.go:130] > # 	"SETGID",
	I0911 11:28:44.812696  227744 command_runner.go:130] > # 	"SETUID",
	I0911 11:28:44.812700  227744 command_runner.go:130] > # 	"SETPCAP",
	I0911 11:28:44.812704  227744 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0911 11:28:44.812707  227744 command_runner.go:130] > # 	"KILL",
	I0911 11:28:44.812710  227744 command_runner.go:130] > # ]
	I0911 11:28:44.812718  227744 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0911 11:28:44.812727  227744 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0911 11:28:44.812731  227744 command_runner.go:130] > # add_inheritable_capabilities = true
	I0911 11:28:44.812739  227744 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0911 11:28:44.812745  227744 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0911 11:28:44.812751  227744 command_runner.go:130] > # default_sysctls = [
	I0911 11:28:44.812754  227744 command_runner.go:130] > # ]
	I0911 11:28:44.812758  227744 command_runner.go:130] > # List of devices on the host that a
	I0911 11:28:44.812764  227744 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0911 11:28:44.812773  227744 command_runner.go:130] > # allowed_devices = [
	I0911 11:28:44.812777  227744 command_runner.go:130] > # 	"/dev/fuse",
	I0911 11:28:44.812783  227744 command_runner.go:130] > # ]
	I0911 11:28:44.812788  227744 command_runner.go:130] > # List of additional devices, specified as
	I0911 11:28:44.812838  227744 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0911 11:28:44.812853  227744 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0911 11:28:44.812862  227744 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0911 11:28:44.812869  227744 command_runner.go:130] > # additional_devices = [
	I0911 11:28:44.812881  227744 command_runner.go:130] > # ]
	I0911 11:28:44.812886  227744 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0911 11:28:44.812890  227744 command_runner.go:130] > # cdi_spec_dirs = [
	I0911 11:28:44.812897  227744 command_runner.go:130] > # 	"/etc/cdi",
	I0911 11:28:44.812901  227744 command_runner.go:130] > # 	"/var/run/cdi",
	I0911 11:28:44.812907  227744 command_runner.go:130] > # ]
	I0911 11:28:44.812913  227744 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0911 11:28:44.812921  227744 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0911 11:28:44.812927  227744 command_runner.go:130] > # Defaults to false.
	I0911 11:28:44.812932  227744 command_runner.go:130] > # device_ownership_from_security_context = false
	I0911 11:28:44.812940  227744 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0911 11:28:44.812948  227744 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0911 11:28:44.812955  227744 command_runner.go:130] > # hooks_dir = [
	I0911 11:28:44.812959  227744 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0911 11:28:44.812965  227744 command_runner.go:130] > # ]
	I0911 11:28:44.812971  227744 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0911 11:28:44.812980  227744 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0911 11:28:44.812987  227744 command_runner.go:130] > # its default mounts from the following two files:
	I0911 11:28:44.812993  227744 command_runner.go:130] > #
	I0911 11:28:44.813001  227744 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0911 11:28:44.813009  227744 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0911 11:28:44.813016  227744 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0911 11:28:44.813022  227744 command_runner.go:130] > #
	I0911 11:28:44.813028  227744 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0911 11:28:44.813036  227744 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0911 11:28:44.813042  227744 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0911 11:28:44.813049  227744 command_runner.go:130] > #      only add mounts it finds in this file.
	I0911 11:28:44.813053  227744 command_runner.go:130] > #
	I0911 11:28:44.813061  227744 command_runner.go:130] > # default_mounts_file = ""
	I0911 11:28:44.813069  227744 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0911 11:28:44.813078  227744 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0911 11:28:44.813089  227744 command_runner.go:130] > # pids_limit = 0
	I0911 11:28:44.813095  227744 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0911 11:28:44.813103  227744 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0911 11:28:44.813111  227744 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0911 11:28:44.813119  227744 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0911 11:28:44.813125  227744 command_runner.go:130] > # log_size_max = -1
	I0911 11:28:44.813132  227744 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0911 11:28:44.813138  227744 command_runner.go:130] > # log_to_journald = false
	I0911 11:28:44.813144  227744 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0911 11:28:44.813151  227744 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0911 11:28:44.813156  227744 command_runner.go:130] > # Path to directory for container attach sockets.
	I0911 11:28:44.813163  227744 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0911 11:28:44.813168  227744 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0911 11:28:44.813175  227744 command_runner.go:130] > # bind_mount_prefix = ""
	I0911 11:28:44.813180  227744 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0911 11:28:44.813186  227744 command_runner.go:130] > # read_only = false
	I0911 11:28:44.813193  227744 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0911 11:28:44.813201  227744 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0911 11:28:44.813205  227744 command_runner.go:130] > # live configuration reload.
	I0911 11:28:44.813211  227744 command_runner.go:130] > # log_level = "info"
	I0911 11:28:44.813216  227744 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0911 11:28:44.813223  227744 command_runner.go:130] > # This option supports live configuration reload.
	I0911 11:28:44.813227  227744 command_runner.go:130] > # log_filter = ""
	I0911 11:28:44.813236  227744 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0911 11:28:44.813244  227744 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0911 11:28:44.813249  227744 command_runner.go:130] > # separated by comma.
	I0911 11:28:44.813253  227744 command_runner.go:130] > # uid_mappings = ""
	I0911 11:28:44.813261  227744 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0911 11:28:44.813268  227744 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0911 11:28:44.813273  227744 command_runner.go:130] > # separated by comma.
	I0911 11:28:44.813279  227744 command_runner.go:130] > # gid_mappings = ""
	I0911 11:28:44.813285  227744 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0911 11:28:44.813293  227744 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0911 11:28:44.813302  227744 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0911 11:28:44.813308  227744 command_runner.go:130] > # minimum_mappable_uid = -1
	I0911 11:28:44.813314  227744 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0911 11:28:44.813322  227744 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0911 11:28:44.813330  227744 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0911 11:28:44.813339  227744 command_runner.go:130] > # minimum_mappable_gid = -1
	I0911 11:28:44.813347  227744 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0911 11:28:44.813354  227744 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0911 11:28:44.813362  227744 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0911 11:28:44.813366  227744 command_runner.go:130] > # ctr_stop_timeout = 30
	I0911 11:28:44.813374  227744 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0911 11:28:44.813403  227744 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0911 11:28:44.813416  227744 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0911 11:28:44.813424  227744 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0911 11:28:44.813434  227744 command_runner.go:130] > # drop_infra_ctr = true
	I0911 11:28:44.813448  227744 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0911 11:28:44.813459  227744 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0911 11:28:44.813466  227744 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0911 11:28:44.813470  227744 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0911 11:28:44.813476  227744 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0911 11:28:44.813480  227744 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0911 11:28:44.813487  227744 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0911 11:28:44.813494  227744 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0911 11:28:44.813500  227744 command_runner.go:130] > # pinns_path = ""
	I0911 11:28:44.813506  227744 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0911 11:28:44.813512  227744 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0911 11:28:44.813518  227744 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0911 11:28:44.813524  227744 command_runner.go:130] > # default_runtime = "runc"
	I0911 11:28:44.813529  227744 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0911 11:28:44.813538  227744 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of creating them as directories).
	I0911 11:28:44.813549  227744 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0911 11:28:44.813556  227744 command_runner.go:130] > # creation as a file is not desired either.
	I0911 11:28:44.813564  227744 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0911 11:28:44.813571  227744 command_runner.go:130] > # the hostname is being managed dynamically.
	I0911 11:28:44.813575  227744 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0911 11:28:44.813581  227744 command_runner.go:130] > # ]
	I0911 11:28:44.813592  227744 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0911 11:28:44.813601  227744 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0911 11:28:44.813609  227744 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0911 11:28:44.813618  227744 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0911 11:28:44.813623  227744 command_runner.go:130] > #
	I0911 11:28:44.813628  227744 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0911 11:28:44.813635  227744 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0911 11:28:44.813640  227744 command_runner.go:130] > #  runtime_type = "oci"
	I0911 11:28:44.813647  227744 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0911 11:28:44.813652  227744 command_runner.go:130] > #  privileged_without_host_devices = false
	I0911 11:28:44.813659  227744 command_runner.go:130] > #  allowed_annotations = []
	I0911 11:28:44.813662  227744 command_runner.go:130] > # Where:
	I0911 11:28:44.813670  227744 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0911 11:28:44.813676  227744 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0911 11:28:44.813684  227744 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0911 11:28:44.813692  227744 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0911 11:28:44.813698  227744 command_runner.go:130] > #   in $PATH.
	I0911 11:28:44.813704  227744 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0911 11:28:44.813711  227744 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0911 11:28:44.813717  227744 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0911 11:28:44.813723  227744 command_runner.go:130] > #   state.
	I0911 11:28:44.813729  227744 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0911 11:28:44.813737  227744 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0911 11:28:44.813743  227744 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0911 11:28:44.813751  227744 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0911 11:28:44.813759  227744 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0911 11:28:44.813768  227744 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0911 11:28:44.813775  227744 command_runner.go:130] > #   The currently recognized values are:
	I0911 11:28:44.813781  227744 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0911 11:28:44.813790  227744 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0911 11:28:44.813798  227744 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0911 11:28:44.813806  227744 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0911 11:28:44.813815  227744 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0911 11:28:44.813825  227744 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0911 11:28:44.813833  227744 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0911 11:28:44.813841  227744 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0911 11:28:44.813850  227744 command_runner.go:130] > #   should be moved to the container's cgroup
	I0911 11:28:44.813856  227744 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0911 11:28:44.813861  227744 command_runner.go:130] > runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	I0911 11:28:44.813867  227744 command_runner.go:130] > runtime_type = "oci"
	I0911 11:28:44.813871  227744 command_runner.go:130] > runtime_root = "/run/runc"
	I0911 11:28:44.813878  227744 command_runner.go:130] > runtime_config_path = ""
	I0911 11:28:44.813881  227744 command_runner.go:130] > monitor_path = ""
	I0911 11:28:44.813888  227744 command_runner.go:130] > monitor_cgroup = ""
	I0911 11:28:44.813892  227744 command_runner.go:130] > monitor_exec_cgroup = ""
	I0911 11:28:44.813955  227744 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0911 11:28:44.813968  227744 command_runner.go:130] > # running containers
	I0911 11:28:44.813973  227744 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0911 11:28:44.813981  227744 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0911 11:28:44.813988  227744 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0911 11:28:44.813996  227744 command_runner.go:130] > # surface and mitigating the consequences of container breakout.
	I0911 11:28:44.814002  227744 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0911 11:28:44.814009  227744 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0911 11:28:44.814014  227744 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0911 11:28:44.814021  227744 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0911 11:28:44.814026  227744 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0911 11:28:44.814032  227744 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0911 11:28:44.814075  227744 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0911 11:28:44.814111  227744 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0911 11:28:44.814123  227744 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0911 11:28:44.814137  227744 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0911 11:28:44.814156  227744 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" (to configure the cpuset).
	I0911 11:28:44.814175  227744 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0911 11:28:44.814203  227744 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0911 11:28:44.814227  227744 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0911 11:28:44.814238  227744 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0911 11:28:44.814253  227744 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0911 11:28:44.814262  227744 command_runner.go:130] > # Example:
	I0911 11:28:44.814271  227744 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0911 11:28:44.814278  227744 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0911 11:28:44.814283  227744 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0911 11:28:44.814291  227744 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0911 11:28:44.814296  227744 command_runner.go:130] > # cpuset = "0-1"
	I0911 11:28:44.814303  227744 command_runner.go:130] > # cpushares = 0
	I0911 11:28:44.814307  227744 command_runner.go:130] > # Where:
	I0911 11:28:44.814314  227744 command_runner.go:130] > # The workload name is workload-type.
	I0911 11:28:44.814321  227744 command_runner.go:130] > # To opt into this workload, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0911 11:28:44.814329  227744 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0911 11:28:44.814339  227744 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0911 11:28:44.814350  227744 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0911 11:28:44.814359  227744 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0911 11:28:44.814364  227744 command_runner.go:130] > # 
	I0911 11:28:44.814371  227744 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0911 11:28:44.814377  227744 command_runner.go:130] > #
	I0911 11:28:44.814383  227744 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0911 11:28:44.814391  227744 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0911 11:28:44.814399  227744 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0911 11:28:44.814410  227744 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0911 11:28:44.814418  227744 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0911 11:28:44.814424  227744 command_runner.go:130] > [crio.image]
	I0911 11:28:44.814430  227744 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0911 11:28:44.814437  227744 command_runner.go:130] > # default_transport = "docker://"
	I0911 11:28:44.814443  227744 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0911 11:28:44.814451  227744 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0911 11:28:44.814457  227744 command_runner.go:130] > # global_auth_file = ""
	I0911 11:28:44.814462  227744 command_runner.go:130] > # The image used to instantiate infra containers.
	I0911 11:28:44.814469  227744 command_runner.go:130] > # This option supports live configuration reload.
	I0911 11:28:44.814474  227744 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0911 11:28:44.814483  227744 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0911 11:28:44.814489  227744 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0911 11:28:44.814497  227744 command_runner.go:130] > # This option supports live configuration reload.
	I0911 11:28:44.814501  227744 command_runner.go:130] > # pause_image_auth_file = ""
	I0911 11:28:44.814509  227744 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0911 11:28:44.814517  227744 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I0911 11:28:44.814525  227744 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I0911 11:28:44.814532  227744 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0911 11:28:44.814538  227744 command_runner.go:130] > # pause_command = "/pause"
	I0911 11:28:44.814544  227744 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0911 11:28:44.814553  227744 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0911 11:28:44.814561  227744 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0911 11:28:44.814569  227744 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0911 11:28:44.814574  227744 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0911 11:28:44.814582  227744 command_runner.go:130] > # signature_policy = ""
	I0911 11:28:44.814627  227744 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0911 11:28:44.814641  227744 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0911 11:28:44.814651  227744 command_runner.go:130] > # changing them here.
	I0911 11:28:44.814660  227744 command_runner.go:130] > # insecure_registries = [
	I0911 11:28:44.814668  227744 command_runner.go:130] > # ]
	I0911 11:28:44.814678  227744 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0911 11:28:44.814690  227744 command_runner.go:130] > # ignore; the last ignores volumes entirely.
	I0911 11:28:44.814700  227744 command_runner.go:130] > # image_volumes = "mkdir"
	I0911 11:28:44.814708  227744 command_runner.go:130] > # Temporary directory to use for storing big files
	I0911 11:28:44.814713  227744 command_runner.go:130] > # big_files_temporary_dir = ""
	I0911 11:28:44.814722  227744 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0911 11:28:44.814726  227744 command_runner.go:130] > # CNI plugins.
	I0911 11:28:44.814731  227744 command_runner.go:130] > [crio.network]
	I0911 11:28:44.814737  227744 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0911 11:28:44.814744  227744 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I0911 11:28:44.814749  227744 command_runner.go:130] > # cni_default_network = ""
	I0911 11:28:44.814757  227744 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0911 11:28:44.814761  227744 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0911 11:28:44.814766  227744 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0911 11:28:44.814772  227744 command_runner.go:130] > # plugin_dirs = [
	I0911 11:28:44.814777  227744 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0911 11:28:44.814782  227744 command_runner.go:130] > # ]
	I0911 11:28:44.814788  227744 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0911 11:28:44.814794  227744 command_runner.go:130] > [crio.metrics]
	I0911 11:28:44.814799  227744 command_runner.go:130] > # Globally enable or disable metrics support.
	I0911 11:28:44.814806  227744 command_runner.go:130] > # enable_metrics = false
	I0911 11:28:44.814810  227744 command_runner.go:130] > # Specify enabled metrics collectors.
	I0911 11:28:44.814817  227744 command_runner.go:130] > # By default, all metrics are enabled.
	I0911 11:28:44.814823  227744 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0911 11:28:44.814831  227744 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0911 11:28:44.814841  227744 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0911 11:28:44.814848  227744 command_runner.go:130] > # metrics_collectors = [
	I0911 11:28:44.814852  227744 command_runner.go:130] > # 	"operations",
	I0911 11:28:44.814859  227744 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0911 11:28:44.814866  227744 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0911 11:28:44.814870  227744 command_runner.go:130] > # 	"operations_errors",
	I0911 11:28:44.814879  227744 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0911 11:28:44.814885  227744 command_runner.go:130] > # 	"image_pulls_by_name",
	I0911 11:28:44.814890  227744 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0911 11:28:44.814897  227744 command_runner.go:130] > # 	"image_pulls_failures",
	I0911 11:28:44.814901  227744 command_runner.go:130] > # 	"image_pulls_successes",
	I0911 11:28:44.814908  227744 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0911 11:28:44.814912  227744 command_runner.go:130] > # 	"image_layer_reuse",
	I0911 11:28:44.814919  227744 command_runner.go:130] > # 	"containers_oom_total",
	I0911 11:28:44.814923  227744 command_runner.go:130] > # 	"containers_oom",
	I0911 11:28:44.814927  227744 command_runner.go:130] > # 	"processes_defunct",
	I0911 11:28:44.814933  227744 command_runner.go:130] > # 	"operations_total",
	I0911 11:28:44.814938  227744 command_runner.go:130] > # 	"operations_latency_seconds",
	I0911 11:28:44.814945  227744 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0911 11:28:44.814952  227744 command_runner.go:130] > # 	"operations_errors_total",
	I0911 11:28:44.814956  227744 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0911 11:28:44.814963  227744 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0911 11:28:44.814967  227744 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0911 11:28:44.814974  227744 command_runner.go:130] > # 	"image_pulls_success_total",
	I0911 11:28:44.814978  227744 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0911 11:28:44.814984  227744 command_runner.go:130] > # 	"containers_oom_count_total",
	I0911 11:28:44.814988  227744 command_runner.go:130] > # ]
	I0911 11:28:44.814995  227744 command_runner.go:130] > # The port on which the metrics server will listen.
	I0911 11:28:44.814999  227744 command_runner.go:130] > # metrics_port = 9090
	I0911 11:28:44.815006  227744 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0911 11:28:44.815010  227744 command_runner.go:130] > # metrics_socket = ""
	I0911 11:28:44.815018  227744 command_runner.go:130] > # The certificate for the secure metrics server.
	I0911 11:28:44.815026  227744 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0911 11:28:44.815034  227744 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0911 11:28:44.815041  227744 command_runner.go:130] > # certificate on any modification event.
	I0911 11:28:44.815045  227744 command_runner.go:130] > # metrics_cert = ""
	I0911 11:28:44.815052  227744 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0911 11:28:44.815058  227744 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0911 11:28:44.815065  227744 command_runner.go:130] > # metrics_key = ""
	I0911 11:28:44.815070  227744 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0911 11:28:44.815077  227744 command_runner.go:130] > [crio.tracing]
	I0911 11:28:44.815086  227744 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0911 11:28:44.815093  227744 command_runner.go:130] > # enable_tracing = false
	I0911 11:28:44.815098  227744 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0911 11:28:44.815104  227744 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0911 11:28:44.815110  227744 command_runner.go:130] > # Number of samples to collect per million spans.
	I0911 11:28:44.815116  227744 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0911 11:28:44.815122  227744 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0911 11:28:44.815128  227744 command_runner.go:130] > [crio.stats]
	I0911 11:28:44.815134  227744 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0911 11:28:44.815142  227744 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0911 11:28:44.815149  227744 command_runner.go:130] > # stats_collection_period = 0
	I0911 11:28:44.816935  227744 command_runner.go:130] ! time="2023-09-11 11:28:44.808868829Z" level=info msg="Starting CRI-O, version: 1.24.6, git: 4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90(clean)"
	I0911 11:28:44.816962  227744 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0911 11:28:44.817061  227744 cni.go:84] Creating CNI manager for ""
	I0911 11:28:44.817070  227744 cni.go:136] 2 nodes found, recommending kindnet
	I0911 11:28:44.817089  227744 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0911 11:28:44.817113  227744 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.3 APIServerPort:8443 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-517978 NodeName:multinode-517978-m02 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0911 11:28:44.817238  227744 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-517978-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
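	For context: the block above is minikube's generated kubeadm config (kubeadm.go:181), rendered from the options printed at 11:28:44.817113. Below is a minimal sketch of that rendering step using Go's text/template; the type and field names are made up for illustration and are not minikube's actual ones.

	package main

	import (
		"os"
		"text/template"
	)

	// nodeParams carries the values substituted into the generated config.
	// Illustrative only; minikube's real template data is richer.
	type nodeParams struct {
		AdvertiseAddress string
		NodeName         string
	}

	const initCfg = `apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.AdvertiseAddress}}
	  bindPort: 8443
	nodeRegistration:
	  name: "{{.NodeName}}"
	  kubeletExtraArgs:
	    node-ip: {{.AdvertiseAddress}}
	`

	func main() {
		t := template.Must(template.New("kubeadm").Parse(initCfg))
		p := nodeParams{AdvertiseAddress: "192.168.58.3", NodeName: "multinode-517978-m02"}
		if err := t.Execute(os.Stdout, p); err != nil {
			panic(err)
		}
	}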
	
	I0911 11:28:44.817290  227744 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=multinode-517978-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:multinode-517978 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0911 11:28:44.817346  227744 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0911 11:28:44.825290  227744 command_runner.go:130] > kubeadm
	I0911 11:28:44.825314  227744 command_runner.go:130] > kubectl
	I0911 11:28:44.825321  227744 command_runner.go:130] > kubelet
	I0911 11:28:44.825945  227744 binaries.go:44] Found k8s binaries, skipping transfer
	I0911 11:28:44.826000  227744 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0911 11:28:44.833626  227744 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0911 11:28:44.848880  227744 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0911 11:28:44.864455  227744 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0911 11:28:44.867542  227744 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
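	The bash one-liner at 11:28:44.867542 rewrites /etc/hosts so control-plane.minikube.internal resolves to the control-plane IP. The same filter-then-append logic as a small Go sketch (stdlib only; the helper name is hypothetical):

	package main

	import (
		"os"
		"strings"
	)

	// ensureHostsEntry mirrors the grep -v / echo pipeline above: drop any
	// existing line ending in "<TAB>host", then append a fresh "ip<TAB>host".
	func ensureHostsEntry(path, ip, host string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if !strings.HasSuffix(line, "\t"+host) {
				kept = append(kept, line)
			}
		}
		kept = append(kept, ip+"\t"+host)
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
	}

	func main() {
		if err := ensureHostsEntry("/etc/hosts", "192.168.58.2", "control-plane.minikube.internal"); err != nil {
			panic(err)
		}
	}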
	I0911 11:28:44.877652  227744 host.go:66] Checking if "multinode-517978" exists ...
	I0911 11:28:44.877894  227744 config.go:182] Loaded profile config "multinode-517978": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0911 11:28:44.877932  227744 start.go:301] JoinCluster: &{Name:multinode-517978 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:multinode-517978 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 11:28:44.878051  227744 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0911 11:28:44.878131  227744 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-517978
	I0911 11:28:44.895325  227744 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32967 SSHKeyPath:/home/jenkins/minikube-integration/17223-136166/.minikube/machines/multinode-517978/id_rsa Username:docker}
	I0911 11:28:45.036536  227744 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token dut88h.aw3imimtidamgjls --discovery-token-ca-cert-hash sha256:29dd5c485ccb33701674fbb0d9adfdc876e5b65e82ced9970c93ac8717b7a347 
	I0911 11:28:45.036593  227744 start.go:322] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0911 11:28:45.036639  227744 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token dut88h.aw3imimtidamgjls --discovery-token-ca-cert-hash sha256:29dd5c485ccb33701674fbb0d9adfdc876e5b65e82ced9970c93ac8717b7a347 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-517978-m02"
	I0911 11:28:45.070351  227744 command_runner.go:130] > [preflight] Running pre-flight checks
	I0911 11:28:45.098487  227744 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I0911 11:28:45.098523  227744 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1041-gcp
	I0911 11:28:45.098532  227744 command_runner.go:130] > OS: Linux
	I0911 11:28:45.098539  227744 command_runner.go:130] > CGROUPS_CPU: enabled
	I0911 11:28:45.098547  227744 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I0911 11:28:45.098555  227744 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I0911 11:28:45.098564  227744 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I0911 11:28:45.098576  227744 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I0911 11:28:45.098588  227744 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I0911 11:28:45.098602  227744 command_runner.go:130] > CGROUPS_PIDS: enabled
	I0911 11:28:45.098615  227744 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I0911 11:28:45.098627  227744 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I0911 11:28:45.176685  227744 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0911 11:28:45.176726  227744 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0911 11:28:45.202923  227744 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0911 11:28:45.202955  227744 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0911 11:28:45.202963  227744 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0911 11:28:45.285749  227744 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0911 11:28:47.298765  227744 command_runner.go:130] > This node has joined the cluster:
	I0911 11:28:47.298787  227744 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0911 11:28:47.298796  227744 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0911 11:28:47.298806  227744 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0911 11:28:47.301391  227744 command_runner.go:130] ! W0911 11:28:45.069810    1107 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I0911 11:28:47.301427  227744 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1041-gcp\n", err: exit status 1
	I0911 11:28:47.301452  227744 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0911 11:28:47.301481  227744 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token dut88h.aw3imimtidamgjls --discovery-token-ca-cert-hash sha256:29dd5c485ccb33701674fbb0d9adfdc876e5b65e82ced9970c93ac8717b7a347 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-517978-m02": (2.264826984s)
	I0911 11:28:47.301499  227744 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0911 11:28:47.465383  227744 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
	I0911 11:28:47.465422  227744 start.go:303] JoinCluster complete in 2.587488517s
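	The join logged at start.go:301-303 is a two-step flow: print a ready-made join command on the control plane (kubeadm token create --print-join-command), then run it on the worker with the extra --ignore-preflight-errors/--cri-socket/--node-name flags. A simplified sketch with os/exec, assuming both steps run locally (minikube actually drives them over SSH):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Step 1: ask kubeadm on the control plane for the join command.
		out, err := exec.Command("kubeadm", "token", "create", "--print-join-command", "--ttl=0").Output()
		if err != nil {
			panic(err)
		}
		join := strings.TrimSpace(string(out))

		// Step 2: run that command on the worker, appending the same extra
		// flags the log shows (node name is the worker's, e.g. multinode-517978-m02).
		join += " --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-517978-m02"
		if b, err := exec.Command("/bin/bash", "-c", "sudo "+join).CombinedOutput(); err != nil {
			panic(fmt.Errorf("%v: %s", err, b))
		}
		fmt.Println("worker joined")
	}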
	I0911 11:28:47.465437  227744 cni.go:84] Creating CNI manager for ""
	I0911 11:28:47.465444  227744 cni.go:136] 2 nodes found, recommending kindnet
	I0911 11:28:47.465494  227744 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0911 11:28:47.469121  227744 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0911 11:28:47.469142  227744 command_runner.go:130] >   Size: 3955775   	Blocks: 7728       IO Block: 4096   regular file
	I0911 11:28:47.469148  227744 command_runner.go:130] > Device: 34h/52d	Inode: 4171654     Links: 1
	I0911 11:28:47.469155  227744 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0911 11:28:47.469168  227744 command_runner.go:130] > Access: 2023-05-09 19:53:47.000000000 +0000
	I0911 11:28:47.469173  227744 command_runner.go:130] > Modify: 2023-05-09 19:53:47.000000000 +0000
	I0911 11:28:47.469178  227744 command_runner.go:130] > Change: 2023-09-11 11:09:31.132301758 +0000
	I0911 11:28:47.469185  227744 command_runner.go:130] >  Birth: 2023-09-11 11:09:31.108299847 +0000
	I0911 11:28:47.469233  227744 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.1/kubectl ...
	I0911 11:28:47.469244  227744 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0911 11:28:47.485115  227744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0911 11:28:47.692842  227744 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0911 11:28:47.699045  227744 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0911 11:28:47.701768  227744 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0911 11:28:47.712360  227744 command_runner.go:130] > daemonset.apps/kindnet configured
	I0911 11:28:47.716755  227744 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17223-136166/kubeconfig
	I0911 11:28:47.717080  227744 kapi.go:59] client config for multinode-517978: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17223-136166/.minikube/profiles/multinode-517978/client.crt", KeyFile:"/home/jenkins/minikube-integration/17223-136166/.minikube/profiles/multinode-517978/client.key", CAFile:"/home/jenkins/minikube-integration/17223-136166/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d6c140), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0911 11:28:47.717505  227744 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0911 11:28:47.717523  227744 round_trippers.go:469] Request Headers:
	I0911 11:28:47.717582  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:28:47.717600  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:28:47.719780  227744 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:28:47.719796  227744 round_trippers.go:577] Response Headers:
	I0911 11:28:47.719802  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:28:47.719808  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:28:47.719813  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:28:47.719819  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:28:47.719826  227744 round_trippers.go:580]     Content-Length: 291
	I0911 11:28:47.719832  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:28:47 GMT
	I0911 11:28:47.719840  227744 round_trippers.go:580]     Audit-Id: 55fb51ce-e8a7-4793-9ed0-de847ba51b1f
	I0911 11:28:47.719859  227744 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"27517565-a45b-4d59-9ce6-25ae123bbba6","resourceVersion":"417","creationTimestamp":"2023-09-11T11:27:48Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0911 11:28:47.719944  227744 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-517978" context rescaled to 1 replicas
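	The rescale logged by kapi.go:248 goes through the deployment's scale subresource (the GET to .../deployments/coredns/scale above) rather than editing the deployment itself. A minimal client-go sketch of the same read-modify-write, assuming a kubeconfig in the default location:

	package main

	import (
		"context"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		ctx := context.Background()

		// Read the current scale of kube-system/coredns via the scale subresource.
		scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		// Pin it to one replica, as the log does for multi-node clusters.
		scale.Spec.Replicas = 1
		if _, err := cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
			panic(err)
		}
	}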
	I0911 11:28:47.719971  227744 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0911 11:28:47.721858  227744 out.go:177] * Verifying Kubernetes components...
	I0911 11:28:47.723235  227744 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0911 11:28:47.734272  227744 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17223-136166/kubeconfig
	I0911 11:28:47.734481  227744 kapi.go:59] client config for multinode-517978: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17223-136166/.minikube/profiles/multinode-517978/client.crt", KeyFile:"/home/jenkins/minikube-integration/17223-136166/.minikube/profiles/multinode-517978/client.key", CAFile:"/home/jenkins/minikube-integration/17223-136166/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d6c140), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0911 11:28:47.734735  227744 node_ready.go:35] waiting up to 6m0s for node "multinode-517978-m02" to be "Ready" ...
	I0911 11:28:47.734793  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-517978-m02
	I0911 11:28:47.734800  227744 round_trippers.go:469] Request Headers:
	I0911 11:28:47.734808  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:28:47.734817  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:28:47.737000  227744 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:28:47.737027  227744 round_trippers.go:577] Response Headers:
	I0911 11:28:47.737039  227744 round_trippers.go:580]     Audit-Id: c285c833-2279-48e5-9ee4-4b9d30f61967
	I0911 11:28:47.737049  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:28:47.737068  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:28:47.737082  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:28:47.737096  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:28:47.737109  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:28:47 GMT
	I0911 11:28:47.737324  227744 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517978-m02","uid":"8dbce3af-f85b-4f9e-ac53-8e9a9bb013b2","resourceVersion":"450","creationTimestamp":"2023-09-11T11:28:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517978-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:28:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:28:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5101 chars]
	I0911 11:28:47.737815  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-517978-m02
	I0911 11:28:47.737835  227744 round_trippers.go:469] Request Headers:
	I0911 11:28:47.737843  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:28:47.737856  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:28:47.739684  227744 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0911 11:28:47.739699  227744 round_trippers.go:577] Response Headers:
	I0911 11:28:47.739706  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:28:47.739712  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:28:47.739717  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:28:47.739723  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:28:47.739728  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:28:47 GMT
	I0911 11:28:47.739735  227744 round_trippers.go:580]     Audit-Id: 1f559fe2-c7bd-4afc-b247-9e418f9188d9
	I0911 11:28:47.739902  227744 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517978-m02","uid":"8dbce3af-f85b-4f9e-ac53-8e9a9bb013b2","resourceVersion":"450","creationTimestamp":"2023-09-11T11:28:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517978-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:28:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:28:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5101 chars]
	I0911 11:28:48.240981  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-517978-m02
	I0911 11:28:48.241007  227744 round_trippers.go:469] Request Headers:
	I0911 11:28:48.241016  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:28:48.241023  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:28:48.243312  227744 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:28:48.243339  227744 round_trippers.go:577] Response Headers:
	I0911 11:28:48.243349  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:28:48 GMT
	I0911 11:28:48.243357  227744 round_trippers.go:580]     Audit-Id: 91fed9ee-205b-4fc4-b54d-3b12e215fa93
	I0911 11:28:48.243363  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:28:48.243371  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:28:48.243385  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:28:48.243404  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:28:48.243520  227744 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517978-m02","uid":"8dbce3af-f85b-4f9e-ac53-8e9a9bb013b2","resourceVersion":"450","creationTimestamp":"2023-09-11T11:28:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517978-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:28:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:28:47Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5101 chars]
	I0911 11:28:48.741092  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-517978-m02
	I0911 11:28:48.741129  227744 round_trippers.go:469] Request Headers:
	I0911 11:28:48.741139  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:28:48.741147  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:28:48.743491  227744 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:28:48.743516  227744 round_trippers.go:577] Response Headers:
	I0911 11:28:48.743528  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:28:48.743537  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:28:48.743546  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:28:48.743555  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:28:48 GMT
	I0911 11:28:48.743564  227744 round_trippers.go:580]     Audit-Id: 6d846db9-3661-447d-911f-2fd30a293bdd
	I0911 11:28:48.743577  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:28:48.743754  227744 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517978-m02","uid":"8dbce3af-f85b-4f9e-ac53-8e9a9bb013b2","resourceVersion":"450","creationTimestamp":"2023-09-11T11:28:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517978-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:28:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:28:47Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5101 chars]
	I0911 11:28:49.241383  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-517978-m02
	I0911 11:28:49.241412  227744 round_trippers.go:469] Request Headers:
	I0911 11:28:49.241423  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:28:49.241435  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:28:49.243781  227744 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:28:49.243809  227744 round_trippers.go:577] Response Headers:
	I0911 11:28:49.243818  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:28:49.243827  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:28:49.243836  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:28:49.243845  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:28:49 GMT
	I0911 11:28:49.243861  227744 round_trippers.go:580]     Audit-Id: 71b9c52f-a3c3-4e24-83e3-18faf0d678a4
	I0911 11:28:49.243874  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:28:49.244026  227744 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517978-m02","uid":"8dbce3af-f85b-4f9e-ac53-8e9a9bb013b2","resourceVersion":"450","creationTimestamp":"2023-09-11T11:28:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517978-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:28:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:28:47Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5101 chars]
	I0911 11:28:49.740420  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-517978-m02
	I0911 11:28:49.740440  227744 round_trippers.go:469] Request Headers:
	I0911 11:28:49.740448  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:28:49.740454  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:28:49.743016  227744 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:28:49.743038  227744 round_trippers.go:577] Response Headers:
	I0911 11:28:49.743045  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:28:49.743051  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:28:49 GMT
	I0911 11:28:49.743056  227744 round_trippers.go:580]     Audit-Id: 2accef80-19fb-47a3-8956-b74def36ee91
	I0911 11:28:49.743062  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:28:49.743067  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:28:49.743072  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:28:49.743186  227744 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517978-m02","uid":"8dbce3af-f85b-4f9e-ac53-8e9a9bb013b2","resourceVersion":"450","creationTimestamp":"2023-09-11T11:28:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517978-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:28:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:28:47Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5101 chars]
	I0911 11:28:49.743500  227744 node_ready.go:58] node "multinode-517978-m02" has status "Ready":"False"
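The repeating block above is minikube's node-readiness wait: node_ready.go issues GET /api/v1/nodes/multinode-517978-m02 roughly every 500ms and keeps looping while the node's Ready status is still "False". A minimal client-go sketch of that pattern follows; the kubeconfig path, poll interval, and timeout are illustrative assumptions, not minikube's actual implementation.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the API server until the named node reports the
// NodeReady condition as True, mirroring the ~500ms cadence seen in the log.
func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
				return nil // node is Ready
			}
		}
		select {
		case <-ctx.Done():
			return ctx.Err() // overall wait timed out
		case <-ticker.C: // try again on the next tick
		}
	}
}

func main() {
	// Assumption: a reachable kubeconfig at the default location.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	if err := waitNodeReady(ctx, cs, "multinode-517978-m02"); err != nil {
		fmt.Println("node never became Ready:", err)
	}
}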
	I0911 11:28:50.240876  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-517978-m02
	I0911 11:28:50.240902  227744 round_trippers.go:469] Request Headers:
	I0911 11:28:50.240912  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:28:50.240920  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:28:50.243290  227744 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:28:50.243319  227744 round_trippers.go:577] Response Headers:
	I0911 11:28:50.243330  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:28:50.243340  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:28:50.243349  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:28:50.243359  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:28:50.243377  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:28:50 GMT
	I0911 11:28:50.243386  227744 round_trippers.go:580]     Audit-Id: 7e70fd0b-9a8a-4f00-b191-b713069ede8f
	I0911 11:28:50.243518  227744 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517978-m02","uid":"8dbce3af-f85b-4f9e-ac53-8e9a9bb013b2","resourceVersion":"450","creationTimestamp":"2023-09-11T11:28:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517978-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:28:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:28:47Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5101 chars]
	I0911 11:28:50.741150  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-517978-m02
	I0911 11:28:50.741171  227744 round_trippers.go:469] Request Headers:
	I0911 11:28:50.741179  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:28:50.741188  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:28:50.743465  227744 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:28:50.743489  227744 round_trippers.go:577] Response Headers:
	I0911 11:28:50.743498  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:28:50 GMT
	I0911 11:28:50.743505  227744 round_trippers.go:580]     Audit-Id: 92c65792-f54e-4143-a737-84710b8107b0
	I0911 11:28:50.743513  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:28:50.743522  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:28:50.743530  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:28:50.743544  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:28:50.743631  227744 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517978-m02","uid":"8dbce3af-f85b-4f9e-ac53-8e9a9bb013b2","resourceVersion":"450","creationTimestamp":"2023-09-11T11:28:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517978-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:28:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:28:47Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5101 chars]
	I0911 11:28:51.241337  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-517978-m02
	I0911 11:28:51.241360  227744 round_trippers.go:469] Request Headers:
	I0911 11:28:51.241368  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:28:51.241374  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:28:51.243837  227744 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:28:51.243860  227744 round_trippers.go:577] Response Headers:
	I0911 11:28:51.243869  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:28:51 GMT
	I0911 11:28:51.243877  227744 round_trippers.go:580]     Audit-Id: 8c3a6d2c-a767-41e6-998f-cf1ce83b6188
	I0911 11:28:51.243885  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:28:51.243896  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:28:51.243904  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:28:51.243913  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:28:51.244039  227744 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517978-m02","uid":"8dbce3af-f85b-4f9e-ac53-8e9a9bb013b2","resourceVersion":"450","creationTimestamp":"2023-09-11T11:28:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517978-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:28:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:28:47Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5101 chars]
	I0911 11:28:51.740657  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-517978-m02
	I0911 11:28:51.740679  227744 round_trippers.go:469] Request Headers:
	I0911 11:28:51.740688  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:28:51.740694  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:28:51.743247  227744 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:28:51.743271  227744 round_trippers.go:577] Response Headers:
	I0911 11:28:51.743280  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:28:51.743289  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:28:51 GMT
	I0911 11:28:51.743297  227744 round_trippers.go:580]     Audit-Id: d82f0fe7-cd6e-4394-85c6-3012533665a0
	I0911 11:28:51.743303  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:28:51.743308  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:28:51.743314  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:28:51.743417  227744 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517978-m02","uid":"8dbce3af-f85b-4f9e-ac53-8e9a9bb013b2","resourceVersion":"469","creationTimestamp":"2023-09-11T11:28:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517978-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:28:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:28:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5210 chars]
	I0911 11:28:51.743727  227744 node_ready.go:58] node "multinode-517978-m02" has status "Ready":"False"
	I0911 11:28:52.241252  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-517978-m02
	I0911 11:28:52.241278  227744 round_trippers.go:469] Request Headers:
	I0911 11:28:52.241286  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:28:52.241292  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:28:52.243608  227744 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:28:52.243646  227744 round_trippers.go:577] Response Headers:
	I0911 11:28:52.243658  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:28:52.243671  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:28:52.243681  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:28:52.243691  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:28:52.243703  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:28:52 GMT
	I0911 11:28:52.243716  227744 round_trippers.go:580]     Audit-Id: ccc6ca68-1ee0-40a4-bcc3-c1e99f967af1
	I0911 11:28:52.243825  227744 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517978-m02","uid":"8dbce3af-f85b-4f9e-ac53-8e9a9bb013b2","resourceVersion":"469","creationTimestamp":"2023-09-11T11:28:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517978-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:28:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:28:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5210 chars]
	I0911 11:28:52.740500  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-517978-m02
	I0911 11:28:52.740521  227744 round_trippers.go:469] Request Headers:
	I0911 11:28:52.740529  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:28:52.740535  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:28:52.743031  227744 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:28:52.743053  227744 round_trippers.go:577] Response Headers:
	I0911 11:28:52.743063  227744 round_trippers.go:580]     Audit-Id: 1d542e98-c516-4939-aca1-6e1c957233da
	I0911 11:28:52.743071  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:28:52.743078  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:28:52.743086  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:28:52.743094  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:28:52.743116  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:28:52 GMT
	I0911 11:28:52.743242  227744 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517978-m02","uid":"8dbce3af-f85b-4f9e-ac53-8e9a9bb013b2","resourceVersion":"469","creationTimestamp":"2023-09-11T11:28:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517978-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:28:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:28:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5210 chars]
	I0911 11:28:53.240443  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-517978-m02
	I0911 11:28:53.240465  227744 round_trippers.go:469] Request Headers:
	I0911 11:28:53.240473  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:28:53.240480  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:28:53.242879  227744 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:28:53.242897  227744 round_trippers.go:577] Response Headers:
	I0911 11:28:53.242904  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:28:53.242910  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:28:53 GMT
	I0911 11:28:53.242915  227744 round_trippers.go:580]     Audit-Id: 2c33f650-21ef-43fc-b9ec-092ab89c13f5
	I0911 11:28:53.242920  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:28:53.242928  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:28:53.242938  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:28:53.243036  227744 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517978-m02","uid":"8dbce3af-f85b-4f9e-ac53-8e9a9bb013b2","resourceVersion":"469","creationTimestamp":"2023-09-11T11:28:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517978-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:28:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:28:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5210 chars]
	I0911 11:28:53.740637  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-517978-m02
	I0911 11:28:53.740658  227744 round_trippers.go:469] Request Headers:
	I0911 11:28:53.740665  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:28:53.740672  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:28:53.743240  227744 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:28:53.743256  227744 round_trippers.go:577] Response Headers:
	I0911 11:28:53.743264  227744 round_trippers.go:580]     Audit-Id: c636052e-a177-4330-bf0e-b34c78130f09
	I0911 11:28:53.743269  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:28:53.743275  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:28:53.743283  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:28:53.743292  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:28:53.743306  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:28:53 GMT
	I0911 11:28:53.743438  227744 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517978-m02","uid":"8dbce3af-f85b-4f9e-ac53-8e9a9bb013b2","resourceVersion":"469","creationTimestamp":"2023-09-11T11:28:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517978-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:28:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:28:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5210 chars]
	I0911 11:28:53.743757  227744 node_ready.go:58] node "multinode-517978-m02" has status "Ready":"False"
	I0911 11:28:54.241168  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-517978-m02
	I0911 11:28:54.241195  227744 round_trippers.go:469] Request Headers:
	I0911 11:28:54.241204  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:28:54.241215  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:28:54.243679  227744 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:28:54.243698  227744 round_trippers.go:577] Response Headers:
	I0911 11:28:54.243704  227744 round_trippers.go:580]     Audit-Id: f9d95e42-e097-4244-be7b-314903cb87a2
	I0911 11:28:54.243710  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:28:54.243716  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:28:54.243721  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:28:54.243728  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:28:54.243733  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:28:54 GMT
	I0911 11:28:54.243865  227744 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517978-m02","uid":"8dbce3af-f85b-4f9e-ac53-8e9a9bb013b2","resourceVersion":"469","creationTimestamp":"2023-09-11T11:28:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517978-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:28:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:28:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5210 chars]
	I0911 11:28:54.740442  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-517978-m02
	I0911 11:28:54.740465  227744 round_trippers.go:469] Request Headers:
	I0911 11:28:54.740472  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:28:54.740479  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:28:54.742857  227744 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:28:54.742881  227744 round_trippers.go:577] Response Headers:
	I0911 11:28:54.742892  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:28:54 GMT
	I0911 11:28:54.742899  227744 round_trippers.go:580]     Audit-Id: 3e198dc3-cc94-445f-bd83-dd7307a30a3d
	I0911 11:28:54.742907  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:28:54.742914  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:28:54.742923  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:28:54.742935  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:28:54.743119  227744 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517978-m02","uid":"8dbce3af-f85b-4f9e-ac53-8e9a9bb013b2","resourceVersion":"469","creationTimestamp":"2023-09-11T11:28:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517978-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:28:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:28:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5210 chars]
	I0911 11:28:55.240730  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-517978-m02
	I0911 11:28:55.240753  227744 round_trippers.go:469] Request Headers:
	I0911 11:28:55.240764  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:28:55.240775  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:28:55.243234  227744 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:28:55.243270  227744 round_trippers.go:577] Response Headers:
	I0911 11:28:55.243285  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:28:55.243293  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:28:55.243301  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:28:55.243309  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:28:55 GMT
	I0911 11:28:55.243317  227744 round_trippers.go:580]     Audit-Id: 2825fb54-abd2-4026-8164-9d4d4814741f
	I0911 11:28:55.243326  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:28:55.243432  227744 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517978-m02","uid":"8dbce3af-f85b-4f9e-ac53-8e9a9bb013b2","resourceVersion":"469","creationTimestamp":"2023-09-11T11:28:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517978-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:28:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:28:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5210 chars]
	I0911 11:28:55.741122  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-517978-m02
	I0911 11:28:55.741147  227744 round_trippers.go:469] Request Headers:
	I0911 11:28:55.741158  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:28:55.741173  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:28:55.743607  227744 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:28:55.743629  227744 round_trippers.go:577] Response Headers:
	I0911 11:28:55.743640  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:28:55.743650  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:28:55.743659  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:28:55 GMT
	I0911 11:28:55.743668  227744 round_trippers.go:580]     Audit-Id: 21770dbd-0bd3-454f-a01b-f9d196212356
	I0911 11:28:55.743684  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:28:55.743690  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:28:55.743779  227744 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517978-m02","uid":"8dbce3af-f85b-4f9e-ac53-8e9a9bb013b2","resourceVersion":"469","creationTimestamp":"2023-09-11T11:28:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517978-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:28:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:28:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5210 chars]
	I0911 11:28:55.744094  227744 node_ready.go:58] node "multinode-517978-m02" has status "Ready":"False"
	I0911 11:28:56.240337  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-517978-m02
	I0911 11:28:56.240358  227744 round_trippers.go:469] Request Headers:
	I0911 11:28:56.240366  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:28:56.240372  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:28:56.242764  227744 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:28:56.242788  227744 round_trippers.go:577] Response Headers:
	I0911 11:28:56.242799  227744 round_trippers.go:580]     Audit-Id: f7051ca8-8c1b-442e-be41-fb4b9013a7f7
	I0911 11:28:56.242808  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:28:56.242820  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:28:56.242833  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:28:56.242845  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:28:56.242857  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:28:56 GMT
	I0911 11:28:56.242973  227744 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517978-m02","uid":"8dbce3af-f85b-4f9e-ac53-8e9a9bb013b2","resourceVersion":"469","creationTimestamp":"2023-09-11T11:28:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517978-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:28:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:28:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5210 chars]
	I0911 11:28:56.740610  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-517978-m02
	I0911 11:28:56.740629  227744 round_trippers.go:469] Request Headers:
	I0911 11:28:56.740637  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:28:56.740651  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:28:56.743028  227744 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:28:56.743052  227744 round_trippers.go:577] Response Headers:
	I0911 11:28:56.743060  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:28:56.743066  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:28:56.743071  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:28:56.743077  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:28:56.743082  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:28:56 GMT
	I0911 11:28:56.743088  227744 round_trippers.go:580]     Audit-Id: f934b488-cd50-4dd9-b63e-4959799fd085
	I0911 11:28:56.743332  227744 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517978-m02","uid":"8dbce3af-f85b-4f9e-ac53-8e9a9bb013b2","resourceVersion":"469","creationTimestamp":"2023-09-11T11:28:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517978-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:28:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:28:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5210 chars]
	I0911 11:28:57.241066  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-517978-m02
	I0911 11:28:57.241088  227744 round_trippers.go:469] Request Headers:
	I0911 11:28:57.241096  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:28:57.241102  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:28:57.243525  227744 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:28:57.243552  227744 round_trippers.go:577] Response Headers:
	I0911 11:28:57.243564  227744 round_trippers.go:580]     Audit-Id: 54b21693-cc77-44f9-927a-546694083503
	I0911 11:28:57.243574  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:28:57.243584  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:28:57.243592  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:28:57.243601  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:28:57.243609  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:28:57 GMT
	I0911 11:28:57.243746  227744 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517978-m02","uid":"8dbce3af-f85b-4f9e-ac53-8e9a9bb013b2","resourceVersion":"469","creationTimestamp":"2023-09-11T11:28:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517978-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:28:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:28:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5210 chars]
	I0911 11:28:57.741353  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-517978-m02
	I0911 11:28:57.741376  227744 round_trippers.go:469] Request Headers:
	I0911 11:28:57.741383  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:28:57.741390  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:28:57.743793  227744 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:28:57.743817  227744 round_trippers.go:577] Response Headers:
	I0911 11:28:57.743828  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:28:57.743836  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:28:57.743844  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:28:57.743853  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:28:57 GMT
	I0911 11:28:57.743862  227744 round_trippers.go:580]     Audit-Id: 2d3c29b1-b94c-45c6-9f2c-eb393118ed82
	I0911 11:28:57.743871  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:28:57.744158  227744 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517978-m02","uid":"8dbce3af-f85b-4f9e-ac53-8e9a9bb013b2","resourceVersion":"474","creationTimestamp":"2023-09-11T11:28:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517978-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:28:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:28:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0911 11:28:57.744500  227744 node_ready.go:58] node "multinode-517978-m02" has status "Ready":"False"
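Note the node object's resourceVersion advancing across iterations (450 → 469 → 474) as the kubelet and controllers post status updates; each poll simply fetches the latest version. An alternative to per-tick GETs is a watch on the single node, sketched here under the same assumptions as the previous example (same imports, cs constructed the same way):

// watchNodeReady waits for NodeReady via a watch instead of polling.
func watchNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
	w, err := cs.CoreV1().Nodes().Watch(ctx, metav1.ListOptions{
		FieldSelector: "metadata.name=" + name,
	})
	if err != nil {
		return err
	}
	defer w.Stop()
	for {
		select {
		case <-ctx.Done():
			return ctx.Err()
		case ev, ok := <-w.ResultChan():
			if !ok {
				return fmt.Errorf("watch on node %q closed", name)
			}
			node, ok := ev.Object.(*corev1.Node)
			if !ok {
				continue // skip non-Node events (e.g. bookmarks)
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
	}
}

A watch avoids the request-per-tick churn visible in this log, at the cost of handling stream reconnects when the watch drops.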
	I0911 11:28:58.240600  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-517978-m02
	I0911 11:28:58.240629  227744 round_trippers.go:469] Request Headers:
	I0911 11:28:58.240638  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:28:58.240644  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:28:58.243026  227744 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:28:58.243047  227744 round_trippers.go:577] Response Headers:
	I0911 11:28:58.243058  227744 round_trippers.go:580]     Audit-Id: eee154da-cc04-4445-81ff-7fe035a0aac3
	I0911 11:28:58.243066  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:28:58.243074  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:28:58.243081  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:28:58.243089  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:28:58.243097  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:28:58 GMT
	I0911 11:28:58.243214  227744 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517978-m02","uid":"8dbce3af-f85b-4f9e-ac53-8e9a9bb013b2","resourceVersion":"474","creationTimestamp":"2023-09-11T11:28:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517978-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:28:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:28:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0911 11:28:58.740759  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-517978-m02
	I0911 11:28:58.740780  227744 round_trippers.go:469] Request Headers:
	I0911 11:28:58.740788  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:28:58.740795  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:28:58.743230  227744 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:28:58.743250  227744 round_trippers.go:577] Response Headers:
	I0911 11:28:58.743263  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:28:58.743273  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:28:58.743281  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:28:58 GMT
	I0911 11:28:58.743288  227744 round_trippers.go:580]     Audit-Id: f2919bc5-9e67-455c-91f6-0bd4430fba1c
	I0911 11:28:58.743301  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:28:58.743310  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:28:58.743413  227744 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517978-m02","uid":"8dbce3af-f85b-4f9e-ac53-8e9a9bb013b2","resourceVersion":"474","creationTimestamp":"2023-09-11T11:28:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517978-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:28:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:28:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0911 11:28:59.241065  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-517978-m02
	I0911 11:28:59.241087  227744 round_trippers.go:469] Request Headers:
	I0911 11:28:59.241095  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:28:59.241101  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:28:59.243555  227744 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:28:59.243576  227744 round_trippers.go:577] Response Headers:
	I0911 11:28:59.243586  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:28:59.243595  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:28:59.243604  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:28:59.243612  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:28:59 GMT
	I0911 11:28:59.243624  227744 round_trippers.go:580]     Audit-Id: 8d13bbd8-d4b7-4ab0-9ff7-3f0eb49eeebf
	I0911 11:28:59.243633  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:28:59.243832  227744 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517978-m02","uid":"8dbce3af-f85b-4f9e-ac53-8e9a9bb013b2","resourceVersion":"474","creationTimestamp":"2023-09-11T11:28:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517978-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:28:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:28:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0911 11:28:59.740356  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-517978-m02
	I0911 11:28:59.740379  227744 round_trippers.go:469] Request Headers:
	I0911 11:28:59.740386  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:28:59.740394  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:28:59.742738  227744 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:28:59.742767  227744 round_trippers.go:577] Response Headers:
	I0911 11:28:59.742778  227744 round_trippers.go:580]     Audit-Id: 6138aa44-60ba-43d9-b86d-831fef4b7b3a
	I0911 11:28:59.742788  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:28:59.742797  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:28:59.742807  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:28:59.742815  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:28:59.742827  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:28:59 GMT
	I0911 11:28:59.742956  227744 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517978-m02","uid":"8dbce3af-f85b-4f9e-ac53-8e9a9bb013b2","resourceVersion":"474","creationTimestamp":"2023-09-11T11:28:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517978-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:28:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:28:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0911 11:29:00.240509  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-517978-m02
	I0911 11:29:00.240532  227744 round_trippers.go:469] Request Headers:
	I0911 11:29:00.240540  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:29:00.240547  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:29:00.243339  227744 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:29:00.243365  227744 round_trippers.go:577] Response Headers:
	I0911 11:29:00.243373  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:29:00.243379  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:29:00.243384  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:29:00.243389  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:29:00 GMT
	I0911 11:29:00.243395  227744 round_trippers.go:580]     Audit-Id: cb00e537-548a-414b-8673-f76ca13965ad
	I0911 11:29:00.243400  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:29:00.243500  227744 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517978-m02","uid":"8dbce3af-f85b-4f9e-ac53-8e9a9bb013b2","resourceVersion":"474","creationTimestamp":"2023-09-11T11:28:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517978-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:28:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:28:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0911 11:29:00.243830  227744 node_ready.go:58] node "multinode-517978-m02" has status "Ready":"False"
	I0911 11:29:00.741257  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-517978-m02
	I0911 11:29:00.741283  227744 round_trippers.go:469] Request Headers:
	I0911 11:29:00.741295  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:29:00.741305  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:29:00.743550  227744 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:29:00.743575  227744 round_trippers.go:577] Response Headers:
	I0911 11:29:00.743587  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:29:00.743595  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:29:00.743604  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:29:00.743610  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:29:00.743616  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:29:00 GMT
	I0911 11:29:00.743625  227744 round_trippers.go:580]     Audit-Id: 9b0b20ae-498b-4af5-8796-6363b30d49d2
	I0911 11:29:00.743750  227744 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517978-m02","uid":"8dbce3af-f85b-4f9e-ac53-8e9a9bb013b2","resourceVersion":"474","creationTimestamp":"2023-09-11T11:28:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517978-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:28:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:28:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0911 11:29:01.241388  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-517978-m02
	I0911 11:29:01.241410  227744 round_trippers.go:469] Request Headers:
	I0911 11:29:01.241418  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:29:01.241424  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:29:01.244126  227744 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:29:01.244148  227744 round_trippers.go:577] Response Headers:
	I0911 11:29:01.244159  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:29:01 GMT
	I0911 11:29:01.244172  227744 round_trippers.go:580]     Audit-Id: 819eba64-f420-45b7-a05b-26e0d05f599a
	I0911 11:29:01.244181  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:29:01.244190  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:29:01.244199  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:29:01.244207  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:29:01.244312  227744 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517978-m02","uid":"8dbce3af-f85b-4f9e-ac53-8e9a9bb013b2","resourceVersion":"474","creationTimestamp":"2023-09-11T11:28:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517978-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:28:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:28:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0911 11:29:01.740928  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-517978-m02
	I0911 11:29:01.743154  227744 round_trippers.go:469] Request Headers:
	I0911 11:29:01.743169  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:29:01.743177  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:29:01.745813  227744 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:29:01.745841  227744 round_trippers.go:577] Response Headers:
	I0911 11:29:01.745852  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:29:01.745860  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:29:01.745866  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:29:01.745872  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:29:01 GMT
	I0911 11:29:01.745877  227744 round_trippers.go:580]     Audit-Id: a1858473-e92b-4b6a-9dd6-b5292f724e6a
	I0911 11:29:01.745886  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:29:01.745981  227744 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517978-m02","uid":"8dbce3af-f85b-4f9e-ac53-8e9a9bb013b2","resourceVersion":"474","creationTimestamp":"2023-09-11T11:28:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517978-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:28:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:28:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0911 11:29:02.240902  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-517978-m02
	I0911 11:29:02.240921  227744 round_trippers.go:469] Request Headers:
	I0911 11:29:02.240929  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:29:02.240936  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:29:02.243244  227744 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:29:02.243274  227744 round_trippers.go:577] Response Headers:
	I0911 11:29:02.243286  227744 round_trippers.go:580]     Audit-Id: c4cfbbf6-bb1a-41ff-b324-5bbbedff7c79
	I0911 11:29:02.243295  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:29:02.243304  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:29:02.243310  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:29:02.243316  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:29:02.243325  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:29:02 GMT
	I0911 11:29:02.243444  227744 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517978-m02","uid":"8dbce3af-f85b-4f9e-ac53-8e9a9bb013b2","resourceVersion":"474","creationTimestamp":"2023-09-11T11:28:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517978-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:28:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:28:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0911 11:29:02.741009  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-517978-m02
	I0911 11:29:02.741036  227744 round_trippers.go:469] Request Headers:
	I0911 11:29:02.741044  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:29:02.741050  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:29:02.743358  227744 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:29:02.743384  227744 round_trippers.go:577] Response Headers:
	I0911 11:29:02.743394  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:29:02.743404  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:29:02.743413  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:29:02 GMT
	I0911 11:29:02.743421  227744 round_trippers.go:580]     Audit-Id: 6407b732-3b5e-4f17-be28-6600a838102d
	I0911 11:29:02.743430  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:29:02.743441  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:29:02.743568  227744 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517978-m02","uid":"8dbce3af-f85b-4f9e-ac53-8e9a9bb013b2","resourceVersion":"474","creationTimestamp":"2023-09-11T11:28:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517978-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:28:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:28:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0911 11:29:02.743910  227744 node_ready.go:58] node "multinode-517978-m02" has status "Ready":"False"
	I0911 11:29:03.241228  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-517978-m02
	I0911 11:29:03.241250  227744 round_trippers.go:469] Request Headers:
	I0911 11:29:03.241258  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:29:03.241264  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:29:03.243835  227744 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:29:03.243863  227744 round_trippers.go:577] Response Headers:
	I0911 11:29:03.243870  227744 round_trippers.go:580]     Audit-Id: 076b8290-5a4a-40d4-9511-a4d9f4da0d55
	I0911 11:29:03.243879  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:29:03.243884  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:29:03.243889  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:29:03.243895  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:29:03.243900  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:29:03 GMT
	I0911 11:29:03.244002  227744 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517978-m02","uid":"8dbce3af-f85b-4f9e-ac53-8e9a9bb013b2","resourceVersion":"474","creationTimestamp":"2023-09-11T11:28:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517978-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:28:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:28:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0911 11:29:03.740601  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-517978-m02
	I0911 11:29:03.740623  227744 round_trippers.go:469] Request Headers:
	I0911 11:29:03.740631  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:29:03.740637  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:29:03.743056  227744 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:29:03.743095  227744 round_trippers.go:577] Response Headers:
	I0911 11:29:03.743103  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:29:03.743109  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:29:03.743114  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:29:03.743122  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:29:03.743131  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:29:03 GMT
	I0911 11:29:03.743140  227744 round_trippers.go:580]     Audit-Id: aed40c22-2f1b-445c-a74b-c8489c9edde4
	I0911 11:29:03.743255  227744 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517978-m02","uid":"8dbce3af-f85b-4f9e-ac53-8e9a9bb013b2","resourceVersion":"474","creationTimestamp":"2023-09-11T11:28:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517978-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:28:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:28:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0911 11:29:04.240773  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-517978-m02
	I0911 11:29:04.240792  227744 round_trippers.go:469] Request Headers:
	I0911 11:29:04.240800  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:29:04.240808  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:29:04.243121  227744 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:29:04.243164  227744 round_trippers.go:577] Response Headers:
	I0911 11:29:04.243173  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:29:04.243179  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:29:04.243185  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:29:04.243191  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:29:04 GMT
	I0911 11:29:04.243196  227744 round_trippers.go:580]     Audit-Id: 4d3bff64-0da1-4291-a865-13fae26420dd
	I0911 11:29:04.243202  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:29:04.243319  227744 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517978-m02","uid":"8dbce3af-f85b-4f9e-ac53-8e9a9bb013b2","resourceVersion":"474","creationTimestamp":"2023-09-11T11:28:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517978-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:28:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:28:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0911 11:29:04.740966  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-517978-m02
	I0911 11:29:04.740986  227744 round_trippers.go:469] Request Headers:
	I0911 11:29:04.740998  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:29:04.741004  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:29:04.743354  227744 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:29:04.743380  227744 round_trippers.go:577] Response Headers:
	I0911 11:29:04.743390  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:29:04.743399  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:29:04.743408  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:29:04.743417  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:29:04 GMT
	I0911 11:29:04.743428  227744 round_trippers.go:580]     Audit-Id: 5a8336ee-5c85-4625-94b2-2a1e08b97318
	I0911 11:29:04.743437  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:29:04.743539  227744 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517978-m02","uid":"8dbce3af-f85b-4f9e-ac53-8e9a9bb013b2","resourceVersion":"474","creationTimestamp":"2023-09-11T11:28:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517978-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:28:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:28:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0911 11:29:05.241208  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-517978-m02
	I0911 11:29:05.241236  227744 round_trippers.go:469] Request Headers:
	I0911 11:29:05.241248  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:29:05.241255  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:29:05.243660  227744 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:29:05.243684  227744 round_trippers.go:577] Response Headers:
	I0911 11:29:05.243695  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:29:05 GMT
	I0911 11:29:05.243705  227744 round_trippers.go:580]     Audit-Id: c81bd4fc-0721-4f8c-b601-bd9862c3cc91
	I0911 11:29:05.243714  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:29:05.243721  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:29:05.243727  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:29:05.243734  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:29:05.243875  227744 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517978-m02","uid":"8dbce3af-f85b-4f9e-ac53-8e9a9bb013b2","resourceVersion":"474","creationTimestamp":"2023-09-11T11:28:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517978-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:28:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:28:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0911 11:29:05.244276  227744 node_ready.go:58] node "multinode-517978-m02" has status "Ready":"False"
	I0911 11:29:05.741379  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-517978-m02
	I0911 11:29:05.741400  227744 round_trippers.go:469] Request Headers:
	I0911 11:29:05.741408  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:29:05.741414  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:29:05.743683  227744 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:29:05.743703  227744 round_trippers.go:577] Response Headers:
	I0911 11:29:05.743710  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:29:05.743717  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:29:05.743725  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:29:05.743734  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:29:05 GMT
	I0911 11:29:05.743744  227744 round_trippers.go:580]     Audit-Id: b637ba66-7f48-430e-95b2-fe436624def8
	I0911 11:29:05.743753  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:29:05.743880  227744 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517978-m02","uid":"8dbce3af-f85b-4f9e-ac53-8e9a9bb013b2","resourceVersion":"474","creationTimestamp":"2023-09-11T11:28:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517978-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:28:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:28:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0911 11:29:06.240361  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-517978-m02
	I0911 11:29:06.240382  227744 round_trippers.go:469] Request Headers:
	I0911 11:29:06.240390  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:29:06.240395  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:29:06.242632  227744 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:29:06.242655  227744 round_trippers.go:577] Response Headers:
	I0911 11:29:06.242665  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:29:06.242672  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:29:06.242680  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:29:06.242689  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:29:06.242697  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:29:06 GMT
	I0911 11:29:06.242706  227744 round_trippers.go:580]     Audit-Id: 8941def7-6fc1-4a7c-91a1-cb66019227a1
	I0911 11:29:06.242827  227744 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517978-m02","uid":"8dbce3af-f85b-4f9e-ac53-8e9a9bb013b2","resourceVersion":"474","creationTimestamp":"2023-09-11T11:28:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517978-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:28:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:28:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0911 11:29:06.740667  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-517978-m02
	I0911 11:29:06.740693  227744 round_trippers.go:469] Request Headers:
	I0911 11:29:06.740706  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:29:06.740713  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:29:06.743017  227744 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:29:06.743038  227744 round_trippers.go:577] Response Headers:
	I0911 11:29:06.743046  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:29:06.743053  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:29:06 GMT
	I0911 11:29:06.743058  227744 round_trippers.go:580]     Audit-Id: 4c4b7469-7857-412a-894f-566055c437f4
	I0911 11:29:06.743064  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:29:06.743070  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:29:06.743075  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:29:06.743228  227744 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517978-m02","uid":"8dbce3af-f85b-4f9e-ac53-8e9a9bb013b2","resourceVersion":"474","creationTimestamp":"2023-09-11T11:28:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517978-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:28:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:28:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0911 11:29:07.240749  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-517978-m02
	I0911 11:29:07.240770  227744 round_trippers.go:469] Request Headers:
	I0911 11:29:07.240778  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:29:07.240785  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:29:07.242908  227744 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:29:07.242930  227744 round_trippers.go:577] Response Headers:
	I0911 11:29:07.242938  227744 round_trippers.go:580]     Audit-Id: a58a7695-c5ce-4e98-b166-7a8c9a95694c
	I0911 11:29:07.242944  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:29:07.242950  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:29:07.242956  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:29:07.242963  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:29:07.242971  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:29:07 GMT
	I0911 11:29:07.243095  227744 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517978-m02","uid":"8dbce3af-f85b-4f9e-ac53-8e9a9bb013b2","resourceVersion":"474","creationTimestamp":"2023-09-11T11:28:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517978-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:28:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:28:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0911 11:29:07.740752  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-517978-m02
	I0911 11:29:07.740772  227744 round_trippers.go:469] Request Headers:
	I0911 11:29:07.740780  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:29:07.740786  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:29:07.743322  227744 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:29:07.743342  227744 round_trippers.go:577] Response Headers:
	I0911 11:29:07.743348  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:29:07 GMT
	I0911 11:29:07.743357  227744 round_trippers.go:580]     Audit-Id: 6fa8a3ba-5dd8-44ec-af7e-e8a0c19764cb
	I0911 11:29:07.743366  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:29:07.743374  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:29:07.743382  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:29:07.743394  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:29:07.743530  227744 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517978-m02","uid":"8dbce3af-f85b-4f9e-ac53-8e9a9bb013b2","resourceVersion":"474","creationTimestamp":"2023-09-11T11:28:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517978-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:28:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:28:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0911 11:29:07.743909  227744 node_ready.go:58] node "multinode-517978-m02" has status "Ready":"False"
	I0911 11:29:08.241181  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-517978-m02
	I0911 11:29:08.241205  227744 round_trippers.go:469] Request Headers:
	I0911 11:29:08.241213  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:29:08.241219  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:29:08.243647  227744 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:29:08.243672  227744 round_trippers.go:577] Response Headers:
	I0911 11:29:08.243680  227744 round_trippers.go:580]     Audit-Id: af357857-5af9-4894-982d-aa695f71a4cd
	I0911 11:29:08.243686  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:29:08.243691  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:29:08.243696  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:29:08.243702  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:29:08.243710  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:29:08 GMT
	I0911 11:29:08.243790  227744 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517978-m02","uid":"8dbce3af-f85b-4f9e-ac53-8e9a9bb013b2","resourceVersion":"474","creationTimestamp":"2023-09-11T11:28:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517978-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:28:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:28:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0911 11:29:08.741413  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-517978-m02
	I0911 11:29:08.741438  227744 round_trippers.go:469] Request Headers:
	I0911 11:29:08.741445  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:29:08.741452  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:29:08.743802  227744 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:29:08.743823  227744 round_trippers.go:577] Response Headers:
	I0911 11:29:08.743833  227744 round_trippers.go:580]     Audit-Id: a7fadd88-62f0-4203-902f-61332a5f5e2f
	I0911 11:29:08.743843  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:29:08.743853  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:29:08.743862  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:29:08.743870  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:29:08.743877  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:29:08 GMT
	I0911 11:29:08.744009  227744 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517978-m02","uid":"8dbce3af-f85b-4f9e-ac53-8e9a9bb013b2","resourceVersion":"474","creationTimestamp":"2023-09-11T11:28:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517978-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:28:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:28:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0911 11:29:09.240620  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-517978-m02
	I0911 11:29:09.240641  227744 round_trippers.go:469] Request Headers:
	I0911 11:29:09.240650  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:29:09.240656  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:29:09.243273  227744 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:29:09.243298  227744 round_trippers.go:577] Response Headers:
	I0911 11:29:09.243309  227744 round_trippers.go:580]     Audit-Id: cd8a212b-1032-43e0-a54d-15f7858dd7d9
	I0911 11:29:09.243319  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:29:09.243327  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:29:09.243335  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:29:09.243346  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:29:09.243356  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:29:09 GMT
	I0911 11:29:09.243454  227744 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517978-m02","uid":"8dbce3af-f85b-4f9e-ac53-8e9a9bb013b2","resourceVersion":"474","creationTimestamp":"2023-09-11T11:28:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517978-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:28:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:28:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0911 11:29:09.741059  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-517978-m02
	I0911 11:29:09.741088  227744 round_trippers.go:469] Request Headers:
	I0911 11:29:09.741099  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:29:09.741107  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:29:09.743573  227744 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:29:09.743599  227744 round_trippers.go:577] Response Headers:
	I0911 11:29:09.743610  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:29:09 GMT
	I0911 11:29:09.743620  227744 round_trippers.go:580]     Audit-Id: 2672ca16-29dc-4b5f-8f4d-d85ac12e8229
	I0911 11:29:09.743630  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:29:09.743639  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:29:09.743647  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:29:09.743657  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:29:09.743777  227744 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517978-m02","uid":"8dbce3af-f85b-4f9e-ac53-8e9a9bb013b2","resourceVersion":"474","creationTimestamp":"2023-09-11T11:28:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517978-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:28:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:28:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0911 11:29:09.744184  227744 node_ready.go:58] node "multinode-517978-m02" has status "Ready":"False"
	I0911 11:29:10.240362  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-517978-m02
	I0911 11:29:10.240388  227744 round_trippers.go:469] Request Headers:
	I0911 11:29:10.240398  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:29:10.240407  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:29:10.242684  227744 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:29:10.242706  227744 round_trippers.go:577] Response Headers:
	I0911 11:29:10.242713  227744 round_trippers.go:580]     Audit-Id: ddb66d2e-5c9c-410a-8af0-34fb204d8277
	I0911 11:29:10.242719  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:29:10.242729  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:29:10.242738  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:29:10.242748  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:29:10.242758  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:29:10 GMT
	I0911 11:29:10.242855  227744 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517978-m02","uid":"8dbce3af-f85b-4f9e-ac53-8e9a9bb013b2","resourceVersion":"474","creationTimestamp":"2023-09-11T11:28:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517978-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:28:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:28:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0911 11:29:10.740441  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-517978-m02
	I0911 11:29:10.740466  227744 round_trippers.go:469] Request Headers:
	I0911 11:29:10.740479  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:29:10.740488  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:29:10.742968  227744 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:29:10.742996  227744 round_trippers.go:577] Response Headers:
	I0911 11:29:10.743006  227744 round_trippers.go:580]     Audit-Id: 7ef1a580-4295-4d7e-bccd-0d0385d0d86e
	I0911 11:29:10.743015  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:29:10.743025  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:29:10.743038  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:29:10.743052  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:29:10.743065  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:29:10 GMT
	I0911 11:29:10.743219  227744 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517978-m02","uid":"8dbce3af-f85b-4f9e-ac53-8e9a9bb013b2","resourceVersion":"474","creationTimestamp":"2023-09-11T11:28:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517978-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:28:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:28:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0911 11:29:11.240649  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-517978-m02
	I0911 11:29:11.240669  227744 round_trippers.go:469] Request Headers:
	I0911 11:29:11.240677  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:29:11.240684  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:29:11.243066  227744 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:29:11.243084  227744 round_trippers.go:577] Response Headers:
	I0911 11:29:11.243091  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:29:11.243096  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:29:11 GMT
	I0911 11:29:11.243104  227744 round_trippers.go:580]     Audit-Id: a1e3af33-1251-43d3-b9fc-a66af7b0297c
	I0911 11:29:11.243112  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:29:11.243129  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:29:11.243138  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:29:11.243274  227744 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517978-m02","uid":"8dbce3af-f85b-4f9e-ac53-8e9a9bb013b2","resourceVersion":"474","creationTimestamp":"2023-09-11T11:28:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517978-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:28:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:28:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0911 11:29:11.740863  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-517978-m02
	I0911 11:29:11.742975  227744 round_trippers.go:469] Request Headers:
	I0911 11:29:11.742989  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:29:11.742996  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:29:11.745408  227744 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:29:11.745434  227744 round_trippers.go:577] Response Headers:
	I0911 11:29:11.745445  227744 round_trippers.go:580]     Audit-Id: 43f22703-b4ae-492c-b0f0-83e99d6086c3
	I0911 11:29:11.745454  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:29:11.745462  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:29:11.745474  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:29:11.745487  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:29:11.745496  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:29:11 GMT
	I0911 11:29:11.745637  227744 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517978-m02","uid":"8dbce3af-f85b-4f9e-ac53-8e9a9bb013b2","resourceVersion":"474","creationTimestamp":"2023-09-11T11:28:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517978-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:28:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:28:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0911 11:29:11.746036  227744 node_ready.go:58] node "multinode-517978-m02" has status "Ready":"False"
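	(Context for the loop above: each ~500 ms cycle is one GET of the node object, after which the node_ready check inspects the node's Ready condition and keeps polling until it flips to True or the wait times out. The following is a minimal, hypothetical client-go sketch of that polling pattern; it is not minikube's actual node_ready.go code, and waitForNodeReady plus the 6-minute timeout are illustrative only.)

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitForNodeReady polls the node every 500ms (the cadence seen in the log)
	// until its Ready condition is True, an API error occurs, or timeout expires.
	func waitForNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
		return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
			func(ctx context.Context) (bool, error) {
				node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					return false, err // stop polling on API errors
				}
				for _, cond := range node.Status.Conditions {
					if cond.Type == corev1.NodeReady {
						return cond.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil // Ready condition not posted yet; keep polling
			})
	}

	func main() {
		// Load the default kubeconfig (~/.kube/config) for the sketch.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		if err := waitForNodeReady(context.Background(), cs, "multinode-517978-m02", 6*time.Minute); err != nil {
			panic(err)
		}
		fmt.Println("node multinode-517978-m02 is Ready")
	}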
	I0911 11:29:12.240323  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-517978-m02
	I0911 11:29:12.240343  227744 round_trippers.go:469] Request Headers:
	I0911 11:29:12.240350  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:29:12.240357  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:29:12.242634  227744 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:29:12.242654  227744 round_trippers.go:577] Response Headers:
	I0911 11:29:12.242671  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:29:12.242690  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:29:12 GMT
	I0911 11:29:12.242704  227744 round_trippers.go:580]     Audit-Id: 83a14ebe-76b0-462f-96e7-701fc81faba5
	I0911 11:29:12.242710  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:29:12.242716  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:29:12.242722  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:29:12.242821  227744 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517978-m02","uid":"8dbce3af-f85b-4f9e-ac53-8e9a9bb013b2","resourceVersion":"474","creationTimestamp":"2023-09-11T11:28:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517978-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:28:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:28:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0911 11:29:12.741419  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-517978-m02
	I0911 11:29:12.741444  227744 round_trippers.go:469] Request Headers:
	I0911 11:29:12.741457  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:29:12.741467  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:29:12.743846  227744 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:29:12.743865  227744 round_trippers.go:577] Response Headers:
	I0911 11:29:12.743873  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:29:12.743880  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:29:12.743890  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:29:12 GMT
	I0911 11:29:12.743901  227744 round_trippers.go:580]     Audit-Id: dac4520d-c58d-450e-bf65-9c0a77371833
	I0911 11:29:12.743910  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:29:12.743918  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:29:12.744046  227744 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517978-m02","uid":"8dbce3af-f85b-4f9e-ac53-8e9a9bb013b2","resourceVersion":"474","creationTimestamp":"2023-09-11T11:28:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517978-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:28:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:28:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0911 11:29:13.240382  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-517978-m02
	I0911 11:29:13.240401  227744 round_trippers.go:469] Request Headers:
	I0911 11:29:13.240409  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:29:13.240415  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:29:13.242748  227744 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:29:13.242777  227744 round_trippers.go:577] Response Headers:
	I0911 11:29:13.242788  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:29:13 GMT
	I0911 11:29:13.242797  227744 round_trippers.go:580]     Audit-Id: 6dce785d-0267-47a0-8435-5a80ec183693
	I0911 11:29:13.242805  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:29:13.242819  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:29:13.242835  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:29:13.242845  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:29:13.242947  227744 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517978-m02","uid":"8dbce3af-f85b-4f9e-ac53-8e9a9bb013b2","resourceVersion":"474","creationTimestamp":"2023-09-11T11:28:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517978-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:28:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:28:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0911 11:29:13.740558  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-517978-m02
	I0911 11:29:13.740581  227744 round_trippers.go:469] Request Headers:
	I0911 11:29:13.740589  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:29:13.740601  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:29:13.742929  227744 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:29:13.742955  227744 round_trippers.go:577] Response Headers:
	I0911 11:29:13.742966  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:29:13.742975  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:29:13.742980  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:29:13 GMT
	I0911 11:29:13.742986  227744 round_trippers.go:580]     Audit-Id: dfbcd5ce-faa8-4195-ae5f-7fab00b91aac
	I0911 11:29:13.742991  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:29:13.742997  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:29:13.743109  227744 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517978-m02","uid":"8dbce3af-f85b-4f9e-ac53-8e9a9bb013b2","resourceVersion":"474","creationTimestamp":"2023-09-11T11:28:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517978-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:28:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:28:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0911 11:29:14.240601  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-517978-m02
	I0911 11:29:14.240625  227744 round_trippers.go:469] Request Headers:
	I0911 11:29:14.240634  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:29:14.240640  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:29:14.243118  227744 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:29:14.243143  227744 round_trippers.go:577] Response Headers:
	I0911 11:29:14.243153  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:29:14.243163  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:29:14 GMT
	I0911 11:29:14.243172  227744 round_trippers.go:580]     Audit-Id: be199cf4-f256-4090-bc93-ce69a50ef0cb
	I0911 11:29:14.243180  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:29:14.243186  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:29:14.243192  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:29:14.243272  227744 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517978-m02","uid":"8dbce3af-f85b-4f9e-ac53-8e9a9bb013b2","resourceVersion":"474","creationTimestamp":"2023-09-11T11:28:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517978-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:28:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:28:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0911 11:29:14.243553  227744 node_ready.go:58] node "multinode-517978-m02" has status "Ready":"False"
	I0911 11:29:14.740840  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-517978-m02
	I0911 11:29:14.740860  227744 round_trippers.go:469] Request Headers:
	I0911 11:29:14.740868  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:29:14.740874  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:29:14.743210  227744 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:29:14.743236  227744 round_trippers.go:577] Response Headers:
	I0911 11:29:14.743247  227744 round_trippers.go:580]     Audit-Id: 5ab2b8bc-ff80-4943-88d5-9f486ce48cfb
	I0911 11:29:14.743256  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:29:14.743264  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:29:14.743277  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:29:14.743283  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:29:14.743292  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:29:14 GMT
	I0911 11:29:14.743399  227744 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517978-m02","uid":"8dbce3af-f85b-4f9e-ac53-8e9a9bb013b2","resourceVersion":"474","creationTimestamp":"2023-09-11T11:28:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517978-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:28:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:28:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0911 11:29:15.241051  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-517978-m02
	I0911 11:29:15.241070  227744 round_trippers.go:469] Request Headers:
	I0911 11:29:15.241078  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:29:15.241085  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:29:15.243473  227744 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:29:15.243503  227744 round_trippers.go:577] Response Headers:
	I0911 11:29:15.243514  227744 round_trippers.go:580]     Audit-Id: 357a0771-9f59-4d22-a608-e963df7bafb6
	I0911 11:29:15.243526  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:29:15.243535  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:29:15.243544  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:29:15.243552  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:29:15.243559  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:29:15 GMT
	I0911 11:29:15.243666  227744 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517978-m02","uid":"8dbce3af-f85b-4f9e-ac53-8e9a9bb013b2","resourceVersion":"474","creationTimestamp":"2023-09-11T11:28:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517978-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:28:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:28:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0911 11:29:15.741368  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-517978-m02
	I0911 11:29:15.741392  227744 round_trippers.go:469] Request Headers:
	I0911 11:29:15.741402  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:29:15.741409  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:29:15.743790  227744 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:29:15.743809  227744 round_trippers.go:577] Response Headers:
	I0911 11:29:15.743816  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:29:15.743822  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:29:15.743827  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:29:15.743833  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:29:15 GMT
	I0911 11:29:15.743838  227744 round_trippers.go:580]     Audit-Id: 35e0b441-3c22-4453-9665-14bc55383e33
	I0911 11:29:15.743844  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:29:15.743999  227744 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517978-m02","uid":"8dbce3af-f85b-4f9e-ac53-8e9a9bb013b2","resourceVersion":"474","creationTimestamp":"2023-09-11T11:28:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517978-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:28:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:28:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0911 11:29:16.240477  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-517978-m02
	I0911 11:29:16.240496  227744 round_trippers.go:469] Request Headers:
	I0911 11:29:16.240504  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:29:16.240510  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:29:16.242910  227744 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:29:16.242942  227744 round_trippers.go:577] Response Headers:
	I0911 11:29:16.242953  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:29:16.242962  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:29:16.242971  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:29:16 GMT
	I0911 11:29:16.242984  227744 round_trippers.go:580]     Audit-Id: 99c142bc-bac7-4616-be7a-b1d1b5f32ec0
	I0911 11:29:16.242997  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:29:16.243007  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:29:16.243106  227744 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517978-m02","uid":"8dbce3af-f85b-4f9e-ac53-8e9a9bb013b2","resourceVersion":"474","creationTimestamp":"2023-09-11T11:28:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517978-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:28:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:28:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0911 11:29:16.740847  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-517978-m02
	I0911 11:29:16.742945  227744 round_trippers.go:469] Request Headers:
	I0911 11:29:16.742959  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:29:16.742966  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:29:16.745179  227744 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:29:16.745199  227744 round_trippers.go:577] Response Headers:
	I0911 11:29:16.745208  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:29:16.745215  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:29:16 GMT
	I0911 11:29:16.745222  227744 round_trippers.go:580]     Audit-Id: f01b82d8-431c-4fd4-8aef-e8d2dc2a03f4
	I0911 11:29:16.745230  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:29:16.745239  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:29:16.745249  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:29:16.745371  227744 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517978-m02","uid":"8dbce3af-f85b-4f9e-ac53-8e9a9bb013b2","resourceVersion":"474","creationTimestamp":"2023-09-11T11:28:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517978-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:28:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:28:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0911 11:29:16.745711  227744 node_ready.go:58] node "multinode-517978-m02" has status "Ready":"False"
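The cadence here (requests at roughly :11.7, :12.2, :12.7, and so on, with a "took 31.5...s" total reported further down) is consistent with a poll-until-timeout wrapper around that per-request check. A sketch of such a wrapper using apimachinery's wait helpers, accepting any condition function of the client-go shape; the 500ms interval matches the log, while the function name and logging are illustrative:

```go
// Sketch: poll a readiness condition every 500ms until it returns true
// or the timeout elapses, logging elapsed time the way the
// "duration metric: took ..." lines do. waitReady is an illustrative
// name, not minikube's node_ready.go.
package readiness

import (
	"context"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/klog/v2"
)

func waitReady(ctx context.Context, what string, timeout time.Duration,
	check func(context.Context) (bool, error)) error {
	start := time.Now()
	defer func() {
		klog.Infof("duration metric: took %s waiting for %s", time.Since(start), what)
	}()
	// A non-nil error from check aborts the poll; returning (false, nil)
	// keeps polling until the timeout.
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true, check)
}
```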
	I0911 11:29:17.241005  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-517978-m02
	I0911 11:29:17.241025  227744 round_trippers.go:469] Request Headers:
	I0911 11:29:17.241032  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:29:17.241040  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:29:17.243268  227744 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:29:17.243286  227744 round_trippers.go:577] Response Headers:
	I0911 11:29:17.243293  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:29:17.243299  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:29:17.243304  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:29:17 GMT
	I0911 11:29:17.243310  227744 round_trippers.go:580]     Audit-Id: a2462b0f-9a30-4df9-94e6-866c20bb4af2
	I0911 11:29:17.243318  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:29:17.243337  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:29:17.243426  227744 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517978-m02","uid":"8dbce3af-f85b-4f9e-ac53-8e9a9bb013b2","resourceVersion":"474","creationTimestamp":"2023-09-11T11:28:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517978-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:28:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:28:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0911 11:29:17.741046  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-517978-m02
	I0911 11:29:17.741067  227744 round_trippers.go:469] Request Headers:
	I0911 11:29:17.741078  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:29:17.741085  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:29:17.743616  227744 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:29:17.743646  227744 round_trippers.go:577] Response Headers:
	I0911 11:29:17.743659  227744 round_trippers.go:580]     Audit-Id: 6e7ddeac-5812-4712-b088-f013cf1e0f26
	I0911 11:29:17.743669  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:29:17.743678  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:29:17.743687  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:29:17.743698  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:29:17.743707  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:29:17 GMT
	I0911 11:29:17.743837  227744 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517978-m02","uid":"8dbce3af-f85b-4f9e-ac53-8e9a9bb013b2","resourceVersion":"474","creationTimestamp":"2023-09-11T11:28:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517978-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:28:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:28:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0911 11:29:18.240378  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-517978-m02
	I0911 11:29:18.240398  227744 round_trippers.go:469] Request Headers:
	I0911 11:29:18.240407  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:29:18.240413  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:29:18.242749  227744 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:29:18.242770  227744 round_trippers.go:577] Response Headers:
	I0911 11:29:18.242777  227744 round_trippers.go:580]     Audit-Id: 8d7fedad-e8c2-4f06-a011-7cf17a00ed28
	I0911 11:29:18.242783  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:29:18.242793  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:29:18.242802  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:29:18.242811  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:29:18.242819  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:29:18 GMT
	I0911 11:29:18.242933  227744 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517978-m02","uid":"8dbce3af-f85b-4f9e-ac53-8e9a9bb013b2","resourceVersion":"474","creationTimestamp":"2023-09-11T11:28:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517978-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:28:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:28:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0911 11:29:18.740457  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-517978-m02
	I0911 11:29:18.740482  227744 round_trippers.go:469] Request Headers:
	I0911 11:29:18.740490  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:29:18.740497  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:29:18.742952  227744 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:29:18.742979  227744 round_trippers.go:577] Response Headers:
	I0911 11:29:18.742991  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:29:18.743001  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:29:18 GMT
	I0911 11:29:18.743010  227744 round_trippers.go:580]     Audit-Id: 43f0c5e9-6895-4fdf-8c93-5f582437c67e
	I0911 11:29:18.743024  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:29:18.743032  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:29:18.743038  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:29:18.743198  227744 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517978-m02","uid":"8dbce3af-f85b-4f9e-ac53-8e9a9bb013b2","resourceVersion":"474","creationTimestamp":"2023-09-11T11:28:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517978-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:28:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:28:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0911 11:29:19.240664  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-517978-m02
	I0911 11:29:19.240688  227744 round_trippers.go:469] Request Headers:
	I0911 11:29:19.240697  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:29:19.240703  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:29:19.243646  227744 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:29:19.243679  227744 round_trippers.go:577] Response Headers:
	I0911 11:29:19.243691  227744 round_trippers.go:580]     Audit-Id: 44fe63b4-914e-4c32-9d10-6223d445d56e
	I0911 11:29:19.243701  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:29:19.243711  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:29:19.243719  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:29:19.243726  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:29:19.243731  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:29:19 GMT
	I0911 11:29:19.243818  227744 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517978-m02","uid":"8dbce3af-f85b-4f9e-ac53-8e9a9bb013b2","resourceVersion":"498","creationTimestamp":"2023-09-11T11:28:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517978-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:28:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:28:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5296 chars]
	I0911 11:29:19.244197  227744 node_ready.go:49] node "multinode-517978-m02" has status "Ready":"True"
	I0911 11:29:19.244214  227744 node_ready.go:38] duration metric: took 31.509465803s waiting for node "multinode-517978-m02" to be "Ready" ...
	I0911 11:29:19.244223  227744 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
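The six label selectors printed in the line above map onto one List of kube-system (the PodList request that follows) plus a per-pod wait for each match. A sketch of that selection step, assuming a client-go clientset; criticalSelectors mirrors the labels in the log, and the helper names are illustrative:

```go
// Sketch: list kube-system once and keep the pods matching any of the
// system-critical label selectors named in the log line above.
package readiness

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

var criticalSelectors = []map[string]string{
	{"k8s-app": "kube-dns"},
	{"component": "etcd"},
	{"component": "kube-apiserver"},
	{"component": "kube-controller-manager"},
	{"k8s-app": "kube-proxy"},
	{"component": "kube-scheduler"},
}

func criticalPods(ctx context.Context, cs kubernetes.Interface) ([]corev1.Pod, error) {
	list, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
	if err != nil {
		return nil, err
	}
	var out []corev1.Pod
	for _, pod := range list.Items {
		for _, sel := range criticalSelectors {
			if matches(pod.Labels, sel) {
				out = append(out, pod)
				break
			}
		}
	}
	return out, nil
}

// matches reports whether labels contains every key/value pair in sel.
func matches(labels, sel map[string]string) bool {
	for k, v := range sel {
		if labels[k] != v {
			return false
		}
	}
	return true
}
```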
	I0911 11:29:19.244302  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0911 11:29:19.244311  227744 round_trippers.go:469] Request Headers:
	I0911 11:29:19.244318  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:29:19.244324  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:29:19.248137  227744 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0911 11:29:19.248165  227744 round_trippers.go:577] Response Headers:
	I0911 11:29:19.248176  227744 round_trippers.go:580]     Audit-Id: a0c76347-d81c-4dcd-b525-61911c99268c
	I0911 11:29:19.248185  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:29:19.248195  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:29:19.248204  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:29:19.248212  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:29:19.248219  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:29:19 GMT
	I0911 11:29:19.248683  227744 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"498"},"items":[{"metadata":{"name":"coredns-5dd5756b68-lmlsc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b64f2269-78cb-4e36-a2a7-e1818a2b093b","resourceVersion":"413","creationTimestamp":"2023-09-11T11:28:01Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"16860251-bd9c-478c-9785-957df249c5c5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:28:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"16860251-bd9c-478c-9785-957df249c5c5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 68974 chars]
	I0911 11:29:19.250815  227744 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-lmlsc" in "kube-system" namespace to be "Ready" ...
	I0911 11:29:19.250882  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-lmlsc
	I0911 11:29:19.250890  227744 round_trippers.go:469] Request Headers:
	I0911 11:29:19.250898  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:29:19.250904  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:29:19.252927  227744 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:29:19.252948  227744 round_trippers.go:577] Response Headers:
	I0911 11:29:19.252958  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:29:19.252967  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:29:19.252979  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:29:19 GMT
	I0911 11:29:19.252988  227744 round_trippers.go:580]     Audit-Id: 33192d51-8816-47de-ba91-7cb19742ad2c
	I0911 11:29:19.253000  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:29:19.253008  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:29:19.253123  227744 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-lmlsc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b64f2269-78cb-4e36-a2a7-e1818a2b093b","resourceVersion":"413","creationTimestamp":"2023-09-11T11:28:01Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"16860251-bd9c-478c-9785-957df249c5c5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:28:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"16860251-bd9c-478c-9785-957df249c5c5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6263 chars]
	I0911 11:29:19.253563  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-517978
	I0911 11:29:19.253577  227744 round_trippers.go:469] Request Headers:
	I0911 11:29:19.253584  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:29:19.253591  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:29:19.255470  227744 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0911 11:29:19.255486  227744 round_trippers.go:577] Response Headers:
	I0911 11:29:19.255493  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:29:19 GMT
	I0911 11:29:19.255499  227744 round_trippers.go:580]     Audit-Id: 3249fc1c-3f0d-40b1-a9ba-d32ebd46b608
	I0911 11:29:19.255504  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:29:19.255510  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:29:19.255515  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:29:19.255520  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:29:19.255630  227744 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517978","uid":"731ec007-5e9d-4124-a20e-4493363fd833","resourceVersion":"394","creationTimestamp":"2023-09-11T11:27:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517978","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-517978","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_27_49_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-11T11:27:45Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0911 11:29:19.255921  227744 pod_ready.go:92] pod "coredns-5dd5756b68-lmlsc" in "kube-system" namespace has status "Ready":"True"
	I0911 11:29:19.255936  227744 pod_ready.go:81] duration metric: took 5.101256ms waiting for pod "coredns-5dd5756b68-lmlsc" in "kube-system" namespace to be "Ready" ...
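Note the pairing in the cycle above: each pod check is a GET of the Pod followed by a GET of the Node it is scheduled on, suggesting the harness also gates pod readiness on the host node's state. A sketch of that two-step predicate, assuming a client-go clientset (podAndNodeReady is an illustrative name):

```go
// Sketch: a pod counts as ready only if its PodReady condition is True
// and the node it runs on reports NodeReady, mirroring the GET-pod /
// GET-node pairs in the log.
package readiness

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func podAndNodeReady(ctx context.Context, cs kubernetes.Interface, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	podReady := false
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
			podReady = true
		}
	}
	if !podReady || pod.Spec.NodeName == "" {
		return false, nil
	}
	node, err := cs.CoreV1().Nodes().Get(ctx, pod.Spec.NodeName, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}
```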
	I0911 11:29:19.255944  227744 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-517978" in "kube-system" namespace to be "Ready" ...
	I0911 11:29:19.255993  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-517978
	I0911 11:29:19.256000  227744 round_trippers.go:469] Request Headers:
	I0911 11:29:19.256007  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:29:19.256013  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:29:19.257777  227744 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0911 11:29:19.257792  227744 round_trippers.go:577] Response Headers:
	I0911 11:29:19.257799  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:29:19 GMT
	I0911 11:29:19.257805  227744 round_trippers.go:580]     Audit-Id: 35d0a2f7-713f-453e-9636-279002e1e0e1
	I0911 11:29:19.257811  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:29:19.257817  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:29:19.257825  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:29:19.257834  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:29:19.257970  227744 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-517978","namespace":"kube-system","uid":"e8ee6b0b-aa4d-4315-8ce1-13e67c030138","resourceVersion":"366","creationTimestamp":"2023-09-11T11:27:49Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"0372ec0a10a9e8ac933ccf1ab6d3e37f","kubernetes.io/config.mirror":"0372ec0a10a9e8ac933ccf1ab6d3e37f","kubernetes.io/config.seen":"2023-09-11T11:27:48.688259211Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-517978","uid":"731ec007-5e9d-4124-a20e-4493363fd833","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:27:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5833 chars]
	I0911 11:29:19.258469  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-517978
	I0911 11:29:19.258488  227744 round_trippers.go:469] Request Headers:
	I0911 11:29:19.258495  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:29:19.258501  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:29:19.260393  227744 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0911 11:29:19.260409  227744 round_trippers.go:577] Response Headers:
	I0911 11:29:19.260419  227744 round_trippers.go:580]     Audit-Id: 7ad66ea1-9dd2-4103-b42f-dd895bf82602
	I0911 11:29:19.260427  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:29:19.260441  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:29:19.260450  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:29:19.260460  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:29:19.260466  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:29:19 GMT
	I0911 11:29:19.260556  227744 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517978","uid":"731ec007-5e9d-4124-a20e-4493363fd833","resourceVersion":"394","creationTimestamp":"2023-09-11T11:27:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517978","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-517978","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_27_49_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-11T11:27:45Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0911 11:29:19.260828  227744 pod_ready.go:92] pod "etcd-multinode-517978" in "kube-system" namespace has status "Ready":"True"
	I0911 11:29:19.260841  227744 pod_ready.go:81] duration metric: took 4.891913ms waiting for pod "etcd-multinode-517978" in "kube-system" namespace to be "Ready" ...
	I0911 11:29:19.260855  227744 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-517978" in "kube-system" namespace to be "Ready" ...
	I0911 11:29:19.260900  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-517978
	I0911 11:29:19.260908  227744 round_trippers.go:469] Request Headers:
	I0911 11:29:19.260914  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:29:19.260920  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:29:19.262689  227744 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0911 11:29:19.262710  227744 round_trippers.go:577] Response Headers:
	I0911 11:29:19.262718  227744 round_trippers.go:580]     Audit-Id: 5ebcfd06-fba5-4f96-9c83-c951aaa946eb
	I0911 11:29:19.262724  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:29:19.262729  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:29:19.262737  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:29:19.262745  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:29:19.262761  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:29:19 GMT
	I0911 11:29:19.262899  227744 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-517978","namespace":"kube-system","uid":"9dc7326e-a6f6-4477-9175-5db6d08e3c2d","resourceVersion":"383","creationTimestamp":"2023-09-11T11:27:47Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"ad1dd79f381ff90e532fcfdde7e87da6","kubernetes.io/config.mirror":"ad1dd79f381ff90e532fcfdde7e87da6","kubernetes.io/config.seen":"2023-09-11T11:27:42.825396082Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-517978","uid":"731ec007-5e9d-4124-a20e-4493363fd833","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:27:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8219 chars]
	I0911 11:29:19.263269  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-517978
	I0911 11:29:19.263280  227744 round_trippers.go:469] Request Headers:
	I0911 11:29:19.263286  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:29:19.263292  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:29:19.265008  227744 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0911 11:29:19.265031  227744 round_trippers.go:577] Response Headers:
	I0911 11:29:19.265042  227744 round_trippers.go:580]     Audit-Id: 8e7fa033-cdcd-4bf6-9f57-8d10cb7353bc
	I0911 11:29:19.265057  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:29:19.265063  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:29:19.265068  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:29:19.265074  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:29:19.265079  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:29:19 GMT
	I0911 11:29:19.265165  227744 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517978","uid":"731ec007-5e9d-4124-a20e-4493363fd833","resourceVersion":"394","creationTimestamp":"2023-09-11T11:27:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517978","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-517978","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_27_49_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-11T11:27:45Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0911 11:29:19.265455  227744 pod_ready.go:92] pod "kube-apiserver-multinode-517978" in "kube-system" namespace has status "Ready":"True"
	I0911 11:29:19.265469  227744 pod_ready.go:81] duration metric: took 4.608355ms waiting for pod "kube-apiserver-multinode-517978" in "kube-system" namespace to be "Ready" ...
	I0911 11:29:19.265478  227744 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-517978" in "kube-system" namespace to be "Ready" ...
	I0911 11:29:19.265520  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-517978
	I0911 11:29:19.265527  227744 round_trippers.go:469] Request Headers:
	I0911 11:29:19.265533  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:29:19.265539  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:29:19.267559  227744 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:29:19.267578  227744 round_trippers.go:577] Response Headers:
	I0911 11:29:19.267585  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:29:19.267591  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:29:19.267596  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:29:19.267603  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:29:19 GMT
	I0911 11:29:19.267609  227744 round_trippers.go:580]     Audit-Id: 68d55b12-dd22-47cc-bb27-8780a4a3ec3e
	I0911 11:29:19.267614  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:29:19.267749  227744 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-517978","namespace":"kube-system","uid":"0ed00710-145d-4aad-91c2-df770397db59","resourceVersion":"384","creationTimestamp":"2023-09-11T11:27:49Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"6d8452f232ba425055b35eb6d6a7e4f2","kubernetes.io/config.mirror":"6d8452f232ba425055b35eb6d6a7e4f2","kubernetes.io/config.seen":"2023-09-11T11:27:48.688264911Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-517978","uid":"731ec007-5e9d-4124-a20e-4493363fd833","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:27:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7794 chars]
	I0911 11:29:19.268245  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-517978
	I0911 11:29:19.268262  227744 round_trippers.go:469] Request Headers:
	I0911 11:29:19.268273  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:29:19.268308  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:29:19.270125  227744 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0911 11:29:19.270147  227744 round_trippers.go:577] Response Headers:
	I0911 11:29:19.270158  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:29:19.270167  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:29:19.270179  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:29:19.270193  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:29:19 GMT
	I0911 11:29:19.270206  227744 round_trippers.go:580]     Audit-Id: 471627b1-c8b7-4ba2-a435-f8d1a67f3b04
	I0911 11:29:19.270215  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:29:19.270322  227744 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517978","uid":"731ec007-5e9d-4124-a20e-4493363fd833","resourceVersion":"394","creationTimestamp":"2023-09-11T11:27:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517978","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-517978","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_27_49_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-11T11:27:45Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0911 11:29:19.270629  227744 pod_ready.go:92] pod "kube-controller-manager-multinode-517978" in "kube-system" namespace has status "Ready":"True"
	I0911 11:29:19.270646  227744 pod_ready.go:81] duration metric: took 5.162698ms waiting for pod "kube-controller-manager-multinode-517978" in "kube-system" namespace to be "Ready" ...
	I0911 11:29:19.270655  227744 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bn2kk" in "kube-system" namespace to be "Ready" ...
	I0911 11:29:19.441045  227744 request.go:629] Waited for 170.314415ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bn2kk
	I0911 11:29:19.441116  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bn2kk
	I0911 11:29:19.441121  227744 round_trippers.go:469] Request Headers:
	I0911 11:29:19.441129  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:29:19.441136  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:29:19.443934  227744 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:29:19.443954  227744 round_trippers.go:577] Response Headers:
	I0911 11:29:19.443961  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:29:19.443967  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:29:19.443972  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:29:19 GMT
	I0911 11:29:19.443977  227744 round_trippers.go:580]     Audit-Id: 1c1cd7b4-f59d-4fbc-97a8-843ee62ed94a
	I0911 11:29:19.443983  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:29:19.443988  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:29:19.444109  227744 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-bn2kk","generateName":"kube-proxy-","namespace":"kube-system","uid":"6568d7ec-0e79-4d62-982e-1775db93730a","resourceVersion":"462","creationTimestamp":"2023-09-11T11:28:47Z","labels":{"controller-revision-hash":"5d69f4f5b5","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d613dcb2-6db5-48c2-9ef6-def50c5b18eb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:28:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d613dcb2-6db5-48c2-9ef6-def50c5b18eb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I0911 11:29:19.640849  227744 request.go:629] Waited for 196.303707ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-517978-m02
	I0911 11:29:19.640906  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-517978-m02
	I0911 11:29:19.640910  227744 round_trippers.go:469] Request Headers:
	I0911 11:29:19.640918  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:29:19.640924  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:29:19.643443  227744 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:29:19.643471  227744 round_trippers.go:577] Response Headers:
	I0911 11:29:19.643480  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:29:19.643486  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:29:19 GMT
	I0911 11:29:19.643491  227744 round_trippers.go:580]     Audit-Id: cdabc838-8393-4463-869d-90b81da8e947
	I0911 11:29:19.643496  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:29:19.643502  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:29:19.643507  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:29:19.643613  227744 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517978-m02","uid":"8dbce3af-f85b-4f9e-ac53-8e9a9bb013b2","resourceVersion":"498","creationTimestamp":"2023-09-11T11:28:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517978-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:28:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:28:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5296 chars]
	I0911 11:29:19.643925  227744 pod_ready.go:92] pod "kube-proxy-bn2kk" in "kube-system" namespace has status "Ready":"True"
	I0911 11:29:19.643940  227744 pod_ready.go:81] duration metric: took 373.279932ms waiting for pod "kube-proxy-bn2kk" in "kube-system" namespace to be "Ready" ...
	I0911 11:29:19.643950  227744 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-s8g9f" in "kube-system" namespace to be "Ready" ...
	I0911 11:29:19.841399  227744 request.go:629] Waited for 197.383099ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s8g9f
	I0911 11:29:19.841486  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s8g9f
	I0911 11:29:19.841507  227744 round_trippers.go:469] Request Headers:
	I0911 11:29:19.841517  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:29:19.841530  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:29:19.844221  227744 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:29:19.844254  227744 round_trippers.go:577] Response Headers:
	I0911 11:29:19.844267  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:29:19.844276  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:29:19.844285  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:29:19.844293  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:29:19.844302  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:29:19 GMT
	I0911 11:29:19.844315  227744 round_trippers.go:580]     Audit-Id: 528b7d08-b4fd-43b0-bc7e-bf381624a455
	I0911 11:29:19.844492  227744 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-s8g9f","generateName":"kube-proxy-","namespace":"kube-system","uid":"68f14c0f-00e4-4014-9613-36142d843e61","resourceVersion":"371","creationTimestamp":"2023-09-11T11:28:01Z","labels":{"controller-revision-hash":"5d69f4f5b5","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d613dcb2-6db5-48c2-9ef6-def50c5b18eb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:28:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d613dcb2-6db5-48c2-9ef6-def50c5b18eb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5510 chars]
	I0911 11:29:20.041393  227744 request.go:629] Waited for 196.3523ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-517978
	I0911 11:29:20.041475  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-517978
	I0911 11:29:20.041486  227744 round_trippers.go:469] Request Headers:
	I0911 11:29:20.041499  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:29:20.041513  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:29:20.044033  227744 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:29:20.044056  227744 round_trippers.go:577] Response Headers:
	I0911 11:29:20.044065  227744 round_trippers.go:580]     Audit-Id: 145fd711-d92c-4cc7-bd4f-63012d1eee29
	I0911 11:29:20.044074  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:29:20.044089  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:29:20.044098  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:29:20.044110  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:29:20.044124  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:29:20 GMT
	I0911 11:29:20.044252  227744 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517978","uid":"731ec007-5e9d-4124-a20e-4493363fd833","resourceVersion":"394","creationTimestamp":"2023-09-11T11:27:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517978","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-517978","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_27_49_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-11T11:27:45Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0911 11:29:20.044582  227744 pod_ready.go:92] pod "kube-proxy-s8g9f" in "kube-system" namespace has status "Ready":"True"
	I0911 11:29:20.044599  227744 pod_ready.go:81] duration metric: took 400.642718ms waiting for pod "kube-proxy-s8g9f" in "kube-system" namespace to be "Ready" ...
	I0911 11:29:20.044614  227744 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-517978" in "kube-system" namespace to be "Ready" ...
	I0911 11:29:20.241063  227744 request.go:629] Waited for 196.351472ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-517978
	I0911 11:29:20.241122  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-517978
	I0911 11:29:20.241126  227744 round_trippers.go:469] Request Headers:
	I0911 11:29:20.241135  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:29:20.241142  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:29:20.243399  227744 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:29:20.243423  227744 round_trippers.go:577] Response Headers:
	I0911 11:29:20.243433  227744 round_trippers.go:580]     Audit-Id: f737348a-db04-40ef-a994-fc1866cf93a0
	I0911 11:29:20.243443  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:29:20.243452  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:29:20.243460  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:29:20.243468  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:29:20.243480  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:29:20 GMT
	I0911 11:29:20.243597  227744 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-517978","namespace":"kube-system","uid":"d30acde8-4c9a-4857-b218-979934d9d41d","resourceVersion":"365","creationTimestamp":"2023-09-11T11:27:46Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"b68774f5eef9a19c580916204e8da67e","kubernetes.io/config.mirror":"b68774f5eef9a19c580916204e8da67e","kubernetes.io/config.seen":"2023-09-11T11:27:42.825392745Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-517978","uid":"731ec007-5e9d-4124-a20e-4493363fd833","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:27:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4676 chars]
	I0911 11:29:20.441461  227744 request.go:629] Waited for 197.407997ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-517978
	I0911 11:29:20.441517  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-517978
	I0911 11:29:20.441522  227744 round_trippers.go:469] Request Headers:
	I0911 11:29:20.441529  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:29:20.441535  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:29:20.443896  227744 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:29:20.443916  227744 round_trippers.go:577] Response Headers:
	I0911 11:29:20.443926  227744 round_trippers.go:580]     Audit-Id: 51a905b5-5c29-4ccd-8a8c-d9d24f21ea18
	I0911 11:29:20.443935  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:29:20.443943  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:29:20.443951  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:29:20.443961  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:29:20.443971  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:29:20 GMT
	I0911 11:29:20.444054  227744 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517978","uid":"731ec007-5e9d-4124-a20e-4493363fd833","resourceVersion":"394","creationTimestamp":"2023-09-11T11:27:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517978","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-517978","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_27_49_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-11T11:27:45Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0911 11:29:20.444374  227744 pod_ready.go:92] pod "kube-scheduler-multinode-517978" in "kube-system" namespace has status "Ready":"True"
	I0911 11:29:20.444388  227744 pod_ready.go:81] duration metric: took 399.768367ms waiting for pod "kube-scheduler-multinode-517978" in "kube-system" namespace to be "Ready" ...
	I0911 11:29:20.444399  227744 pod_ready.go:38] duration metric: took 1.200168307s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0911 11:29:20.444412  227744 system_svc.go:44] waiting for kubelet service to be running ....
	I0911 11:29:20.444456  227744 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0911 11:29:20.455069  227744 system_svc.go:56] duration metric: took 10.646005ms WaitForService to wait for kubelet.
	I0911 11:29:20.455111  227744 kubeadm.go:581] duration metric: took 32.735102933s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0911 11:29:20.455142  227744 node_conditions.go:102] verifying NodePressure condition ...
	I0911 11:29:20.641564  227744 request.go:629] Waited for 186.341608ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I0911 11:29:20.641637  227744 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I0911 11:29:20.641642  227744 round_trippers.go:469] Request Headers:
	I0911 11:29:20.641649  227744 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:29:20.641656  227744 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:29:20.644037  227744 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:29:20.644063  227744 round_trippers.go:577] Response Headers:
	I0911 11:29:20.644074  227744 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6916f386-fd63-467f-a097-70a73a8e596f
	I0911 11:29:20.644083  227744 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:29:20 GMT
	I0911 11:29:20.644092  227744 round_trippers.go:580]     Audit-Id: a9ceb1c7-3538-40fc-8c53-3353cbe26cde
	I0911 11:29:20.644100  227744 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:29:20.644105  227744 round_trippers.go:580]     Content-Type: application/json
	I0911 11:29:20.644111  227744 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 392e76a2-98f4-434f-9f7d-de409a894645
	I0911 11:29:20.644289  227744 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"500"},"items":[{"metadata":{"name":"multinode-517978","uid":"731ec007-5e9d-4124-a20e-4493363fd833","resourceVersion":"394","creationTimestamp":"2023-09-11T11:27:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517978","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-517978","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_27_49_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 12288 chars]
	I0911 11:29:20.644778  227744 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0911 11:29:20.644790  227744 node_conditions.go:123] node cpu capacity is 8
	I0911 11:29:20.644799  227744 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0911 11:29:20.644803  227744 node_conditions.go:123] node cpu capacity is 8
	I0911 11:29:20.644807  227744 node_conditions.go:105] duration metric: took 189.660286ms to run NodePressure ...
	I0911 11:29:20.644818  227744 start.go:228] waiting for startup goroutines ...
	I0911 11:29:20.644845  227744 start.go:242] writing updated cluster config ...
	I0911 11:29:20.645149  227744 ssh_runner.go:195] Run: rm -f paused
	I0911 11:29:20.691758  227744 start.go:600] kubectl: 1.28.1, cluster: 1.28.1 (minor skew: 0)
	I0911 11:29:20.694555  227744 out.go:177] * Done! kubectl is now configured to use "multinode-517978" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* Sep 11 11:28:33 multinode-517978 crio[957]: time="2023-09-11 11:28:33.382308936Z" level=info msg="Starting container: e3147e0fa0d12652104d8c07635f9099f7aa82332e04b7a8d3819fd535175d4d" id=301ac32d-7423-4662-94dd-ecdad19add6e name=/runtime.v1.RuntimeService/StartContainer
	Sep 11 11:28:33 multinode-517978 crio[957]: time="2023-09-11 11:28:33.382982146Z" level=info msg="Created container 3db73cd2f040a57fd4bbd1e91206d35e9b3b8a6b91f92e131fc0266658cf3412: kube-system/coredns-5dd5756b68-lmlsc/coredns" id=3d6712bf-3d89-486b-a278-9e539fddcd69 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 11 11:28:33 multinode-517978 crio[957]: time="2023-09-11 11:28:33.383500608Z" level=info msg="Starting container: 3db73cd2f040a57fd4bbd1e91206d35e9b3b8a6b91f92e131fc0266658cf3412" id=8f9559fa-3628-46a4-8f03-0ed6237a6fa1 name=/runtime.v1.RuntimeService/StartContainer
	Sep 11 11:28:33 multinode-517978 crio[957]: time="2023-09-11 11:28:33.391913548Z" level=info msg="Started container" PID=2352 containerID=e3147e0fa0d12652104d8c07635f9099f7aa82332e04b7a8d3819fd535175d4d description=kube-system/storage-provisioner/storage-provisioner id=301ac32d-7423-4662-94dd-ecdad19add6e name=/runtime.v1.RuntimeService/StartContainer sandboxID=f765bcd9d83c787abb4cd3bef27fc70ebc7e55ec2528f06898ad3f3ba006e31f
	Sep 11 11:28:33 multinode-517978 crio[957]: time="2023-09-11 11:28:33.392437504Z" level=info msg="Started container" PID=2354 containerID=3db73cd2f040a57fd4bbd1e91206d35e9b3b8a6b91f92e131fc0266658cf3412 description=kube-system/coredns-5dd5756b68-lmlsc/coredns id=8f9559fa-3628-46a4-8f03-0ed6237a6fa1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=eb88cdffe5df6e94bac7b945845c69f28a777783e530e1b4b314f462a73a69a2
	Sep 11 11:29:21 multinode-517978 crio[957]: time="2023-09-11 11:29:21.676697371Z" level=info msg="Running pod sandbox: default/busybox-5bc68d56bd-qrkdr/POD" id=11812f08-ad94-4f06-98f8-00746073a2e6 name=/runtime.v1.RuntimeService/RunPodSandbox
	Sep 11 11:29:21 multinode-517978 crio[957]: time="2023-09-11 11:29:21.676763479Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 11 11:29:21 multinode-517978 crio[957]: time="2023-09-11 11:29:21.691185751Z" level=info msg="Got pod network &{Name:busybox-5bc68d56bd-qrkdr Namespace:default ID:e82373e32f00f19d28bbb5e6aec51b2bed7fc91261e2630e87715ace77badc93 UID:69894672-42f8-46b3-9c22-e5b2f88a7734 NetNS:/var/run/netns/e08862e1-ef67-44ed-ad17-fdb240dad820 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 11 11:29:21 multinode-517978 crio[957]: time="2023-09-11 11:29:21.691237039Z" level=info msg="Adding pod default_busybox-5bc68d56bd-qrkdr to CNI network \"kindnet\" (type=ptp)"
	Sep 11 11:29:21 multinode-517978 crio[957]: time="2023-09-11 11:29:21.699786804Z" level=info msg="Got pod network &{Name:busybox-5bc68d56bd-qrkdr Namespace:default ID:e82373e32f00f19d28bbb5e6aec51b2bed7fc91261e2630e87715ace77badc93 UID:69894672-42f8-46b3-9c22-e5b2f88a7734 NetNS:/var/run/netns/e08862e1-ef67-44ed-ad17-fdb240dad820 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 11 11:29:21 multinode-517978 crio[957]: time="2023-09-11 11:29:21.699898886Z" level=info msg="Checking pod default_busybox-5bc68d56bd-qrkdr for CNI network kindnet (type=ptp)"
	Sep 11 11:29:21 multinode-517978 crio[957]: time="2023-09-11 11:29:21.721338904Z" level=info msg="Ran pod sandbox e82373e32f00f19d28bbb5e6aec51b2bed7fc91261e2630e87715ace77badc93 with infra container: default/busybox-5bc68d56bd-qrkdr/POD" id=11812f08-ad94-4f06-98f8-00746073a2e6 name=/runtime.v1.RuntimeService/RunPodSandbox
	Sep 11 11:29:21 multinode-517978 crio[957]: time="2023-09-11 11:29:21.722387573Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=ac02af61-a8ae-42dd-91ce-b0e0f5b1cf69 name=/runtime.v1.ImageService/ImageStatus
	Sep 11 11:29:21 multinode-517978 crio[957]: time="2023-09-11 11:29:21.722649300Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28 not found" id=ac02af61-a8ae-42dd-91ce-b0e0f5b1cf69 name=/runtime.v1.ImageService/ImageStatus
	Sep 11 11:29:21 multinode-517978 crio[957]: time="2023-09-11 11:29:21.723908581Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28" id=dff950fb-dbf5-4822-8d2e-ecfd094ae1e6 name=/runtime.v1.ImageService/PullImage
	Sep 11 11:29:21 multinode-517978 crio[957]: time="2023-09-11 11:29:21.727094646Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Sep 11 11:29:22 multinode-517978 crio[957]: time="2023-09-11 11:29:22.265255091Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Sep 11 11:29:23 multinode-517978 crio[957]: time="2023-09-11 11:29:23.277276367Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335" id=dff950fb-dbf5-4822-8d2e-ecfd094ae1e6 name=/runtime.v1.ImageService/PullImage
	Sep 11 11:29:23 multinode-517978 crio[957]: time="2023-09-11 11:29:23.278156758Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=96f587e9-6592-4de6-9254-e4990e7a4aba name=/runtime.v1.ImageService/ImageStatus
	Sep 11 11:29:23 multinode-517978 crio[957]: time="2023-09-11 11:29:23.278728663Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,RepoTags:[gcr.io/k8s-minikube/busybox:1.28],RepoDigests:[gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335 gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12],Size_:1363676,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=96f587e9-6592-4de6-9254-e4990e7a4aba name=/runtime.v1.ImageService/ImageStatus
	Sep 11 11:29:23 multinode-517978 crio[957]: time="2023-09-11 11:29:23.279474814Z" level=info msg="Creating container: default/busybox-5bc68d56bd-qrkdr/busybox" id=7c75ad6d-4b88-4404-99d4-c58bc23275de name=/runtime.v1.RuntimeService/CreateContainer
	Sep 11 11:29:23 multinode-517978 crio[957]: time="2023-09-11 11:29:23.279563952Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 11 11:29:23 multinode-517978 crio[957]: time="2023-09-11 11:29:23.346688758Z" level=info msg="Created container ad48a3c770dca41e006ecdf9bd29b5c0f02350215eb71b070dd726bd75260cb7: default/busybox-5bc68d56bd-qrkdr/busybox" id=7c75ad6d-4b88-4404-99d4-c58bc23275de name=/runtime.v1.RuntimeService/CreateContainer
	Sep 11 11:29:23 multinode-517978 crio[957]: time="2023-09-11 11:29:23.347281513Z" level=info msg="Starting container: ad48a3c770dca41e006ecdf9bd29b5c0f02350215eb71b070dd726bd75260cb7" id=58345584-e50a-41f5-843e-80497953187f name=/runtime.v1.RuntimeService/StartContainer
	Sep 11 11:29:23 multinode-517978 crio[957]: time="2023-09-11 11:29:23.355649286Z" level=info msg="Started container" PID=2535 containerID=ad48a3c770dca41e006ecdf9bd29b5c0f02350215eb71b070dd726bd75260cb7 description=default/busybox-5bc68d56bd-qrkdr/busybox id=58345584-e50a-41f5-843e-80497953187f name=/runtime.v1.RuntimeService/StartContainer sandboxID=e82373e32f00f19d28bbb5e6aec51b2bed7fc91261e2630e87715ace77badc93
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	ad48a3c770dca       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 seconds ago        Running             busybox                   0                   e82373e32f00f       busybox-5bc68d56bd-qrkdr
	3db73cd2f040a       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      54 seconds ago       Running             coredns                   0                   eb88cdffe5df6       coredns-5dd5756b68-lmlsc
	e3147e0fa0d12       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      54 seconds ago       Running             storage-provisioner       0                   f765bcd9d83c7       storage-provisioner
	af9c458a401e2       6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5                                      About a minute ago   Running             kube-proxy                0                   e1f0e2c82cf3b       kube-proxy-s8g9f
	d53155ebbdb16       b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da                                      About a minute ago   Running             kindnet-cni               0                   50cbdbe4b2252       kindnet-4qgdc
	1af18db79efe9       5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77                                      About a minute ago   Running             kube-apiserver            0                   261e7a2988d8f       kube-apiserver-multinode-517978
	0a7f53d516d99       821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac                                      About a minute ago   Running             kube-controller-manager   0                   17d6ba41694aa       kube-controller-manager-multinode-517978
	a3214737a04ce       b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a                                      About a minute ago   Running             kube-scheduler            0                   c06cbc6087217       kube-scheduler-multinode-517978
	412793bbdf094       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      About a minute ago   Running             etcd                      0                   79e7e84854e93       etcd-multinode-517978
	
	* 
	* ==> coredns [3db73cd2f040a57fd4bbd1e91206d35e9b3b8a6b91f92e131fc0266658cf3412] <==
	* [INFO] 10.244.1.2:37856 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000110283s
	[INFO] 10.244.0.3:34932 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000097704s
	[INFO] 10.244.0.3:41217 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001556629s
	[INFO] 10.244.0.3:51647 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000051346s
	[INFO] 10.244.0.3:43268 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000064179s
	[INFO] 10.244.0.3:51921 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001139602s
	[INFO] 10.244.0.3:58769 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000060262s
	[INFO] 10.244.0.3:50049 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000056098s
	[INFO] 10.244.0.3:43010 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000048626s
	[INFO] 10.244.1.2:60906 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000120352s
	[INFO] 10.244.1.2:43456 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000080737s
	[INFO] 10.244.1.2:49091 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000053039s
	[INFO] 10.244.1.2:44484 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000062191s
	[INFO] 10.244.0.3:45674 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000119108s
	[INFO] 10.244.0.3:38454 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000092304s
	[INFO] 10.244.0.3:42458 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000061347s
	[INFO] 10.244.0.3:42083 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000052647s
	[INFO] 10.244.1.2:39020 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000121562s
	[INFO] 10.244.1.2:51548 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00011189s
	[INFO] 10.244.1.2:50626 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000125907s
	[INFO] 10.244.1.2:50229 - 5 "PTR IN 1.58.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000084987s
	[INFO] 10.244.0.3:46370 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000099989s
	[INFO] 10.244.0.3:56852 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000070785s
	[INFO] 10.244.0.3:43639 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000053133s
	[INFO] 10.244.0.3:46333 - 5 "PTR IN 1.58.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000056982s
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-517978
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-517978
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=58460de6978298fe1c37b30354468f3a287d03e9
	                    minikube.k8s.io/name=multinode-517978
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_11T11_27_49_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 11 Sep 2023 11:27:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-517978
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 11 Sep 2023 11:29:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 11 Sep 2023 11:28:32 +0000   Mon, 11 Sep 2023 11:27:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 11 Sep 2023 11:28:32 +0000   Mon, 11 Sep 2023 11:27:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 11 Sep 2023 11:28:32 +0000   Mon, 11 Sep 2023 11:27:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 11 Sep 2023 11:28:32 +0000   Mon, 11 Sep 2023 11:28:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    multinode-517978
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859436Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859436Ki
	  pods:               110
	System Info:
	  Machine ID:                 5249118c23894c5b83bcd5d02c96b992
	  System UUID:                f8ae0157-fafc-4761-98ec-2294bc8fbe59
	  Boot ID:                    0e6f3313-afe9-4b8d-8d49-46470123e935
	  Kernel Version:             5.15.0-1041-gcp
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.1
	  Kube-Proxy Version:         v1.28.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-qrkdr                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6s
	  kube-system                 coredns-5dd5756b68-lmlsc                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     86s
	  kube-system                 etcd-multinode-517978                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         98s
	  kube-system                 kindnet-4qgdc                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      86s
	  kube-system                 kube-apiserver-multinode-517978             250m (3%)     0 (0%)      0 (0%)           0 (0%)         100s
	  kube-system                 kube-controller-manager-multinode-517978    200m (2%)     0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 kube-proxy-s8g9f                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         86s
	  kube-system                 kube-scheduler-multinode-517978             100m (1%)     0 (0%)      0 (0%)           0 (0%)         101s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         85s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 85s   kube-proxy       
	  Normal  Starting                 99s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  99s   kubelet          Node multinode-517978 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    99s   kubelet          Node multinode-517978 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     99s   kubelet          Node multinode-517978 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           86s   node-controller  Node multinode-517978 event: Registered Node multinode-517978 in Controller
	  Normal  NodeReady                55s   kubelet          Node multinode-517978 status is now: NodeReady
	
	
	Name:               multinode-517978-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-517978-m02
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 11 Sep 2023 11:28:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-517978-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 11 Sep 2023 11:29:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 11 Sep 2023 11:29:18 +0000   Mon, 11 Sep 2023 11:28:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 11 Sep 2023 11:29:18 +0000   Mon, 11 Sep 2023 11:28:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 11 Sep 2023 11:29:18 +0000   Mon, 11 Sep 2023 11:28:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 11 Sep 2023 11:29:18 +0000   Mon, 11 Sep 2023 11:29:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.3
	  Hostname:    multinode-517978-m02
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859436Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859436Ki
	  pods:               110
	System Info:
	  Machine ID:                 38d998e2bea349aea0fc54a105d1fff8
	  System UUID:                b354f0ed-7942-4b27-9020-6376f0d8c8da
	  Boot ID:                    0e6f3313-afe9-4b8d-8d49-46470123e935
	  Kernel Version:             5.15.0-1041-gcp
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.1
	  Kube-Proxy Version:         v1.28.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-l4r9c    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6s
	  kube-system                 kindnet-65nwg               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      40s
	  kube-system                 kube-proxy-bn2kk            0 (0%)        0 (0%)      0 (0%)           0 (0%)         40s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (1%)  100m (1%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 39s                kube-proxy       
	  Normal  NodeHasSufficientMemory  40s (x5 over 42s)  kubelet          Node multinode-517978-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    40s (x5 over 42s)  kubelet          Node multinode-517978-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     40s (x5 over 42s)  kubelet          Node multinode-517978-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           36s                node-controller  Node multinode-517978-m02 event: Registered Node multinode-517978-m02 in Controller
	  Normal  NodeReady                9s                 kubelet          Node multinode-517978-m02 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.004958] FS-Cache: N-cookie c=0000000f [p=00000003 fl=2 nc=0 na=1]
	[  +0.006591] FS-Cache: N-cookie d=0000000025153437{9p.inode} n=0000000009a7faf7
	[  +0.007362] FS-Cache: N-key=[8] '0390130200000000'
	[  +0.279006] FS-Cache: Duplicate cookie detected
	[  +0.004692] FS-Cache: O-cookie c=00000009 [p=00000003 fl=226 nc=0 na=1]
	[  +0.006749] FS-Cache: O-cookie d=0000000025153437{9p.inode} n=00000000114ccba9
	[  +0.007360] FS-Cache: O-key=[8] '0690130200000000'
	[  +0.004933] FS-Cache: N-cookie c=00000010 [p=00000003 fl=2 nc=0 na=1]
	[  +0.006586] FS-Cache: N-cookie d=0000000025153437{9p.inode} n=000000003d6afd37
	[  +0.007368] FS-Cache: N-key=[8] '0690130200000000'
	[Sep11 11:18] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Sep11 11:19] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: da ce 4b d6 5e 85 42 c2 f8 23 8b 34 08 00
	[  +1.028025] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000031] ll header: 00000000: da ce 4b d6 5e 85 42 c2 f8 23 8b 34 08 00
	[  +2.015874] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: da ce 4b d6 5e 85 42 c2 f8 23 8b 34 08 00
	[Sep11 11:20] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: da ce 4b d6 5e 85 42 c2 f8 23 8b 34 08 00
	[  +8.187363] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: da ce 4b d6 5e 85 42 c2 f8 23 8b 34 08 00
	[ +16.126692] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000030] ll header: 00000000: da ce 4b d6 5e 85 42 c2 f8 23 8b 34 08 00
	[ +33.789234] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: da ce 4b d6 5e 85 42 c2 f8 23 8b 34 08 00
	
	* 
	* ==> etcd [412793bbdf094fadc6c8cc0fbff40067c3862d6f314152590e5e91ca0ac459a5] <==
	* {"level":"info","ts":"2023-09-11T11:27:43.593584Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 switched to configuration voters=(12882097698489969905)"}
	{"level":"info","ts":"2023-09-11T11:27:43.59371Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","added-peer-id":"b2c6679ac05f2cf1","added-peer-peer-urls":["https://192.168.58.2:2380"]}
	{"level":"info","ts":"2023-09-11T11:27:43.594083Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-09-11T11:27:43.594222Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-09-11T11:27:43.594288Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-09-11T11:27:43.594678Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"b2c6679ac05f2cf1","initial-advertise-peer-urls":["https://192.168.58.2:2380"],"listen-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.58.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-09-11T11:27:43.594706Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-09-11T11:27:43.682228Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 is starting a new election at term 1"}
	{"level":"info","ts":"2023-09-11T11:27:43.682274Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-09-11T11:27:43.682303Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 1"}
	{"level":"info","ts":"2023-09-11T11:27:43.68232Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 2"}
	{"level":"info","ts":"2023-09-11T11:27:43.682329Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-09-11T11:27:43.68234Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 2"}
	{"level":"info","ts":"2023-09-11T11:27:43.682352Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-09-11T11:27:43.68307Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-11T11:27:43.683759Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:multinode-517978 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2023-09-11T11:27:43.683801Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-11T11:27:43.683831Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-11T11:27:43.684035Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-11T11:27:43.684132Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-11T11:27:43.684161Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-11T11:27:43.684153Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-09-11T11:27:43.684212Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-09-11T11:27:43.685128Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-09-11T11:27:43.685181Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.58.2:2379"}
	
	* 
	* ==> kernel <==
	*  11:29:27 up  1:11,  0 users,  load average: 0.52, 1.15, 1.41
	Linux multinode-517978 5.15.0-1041-gcp #49~20.04.1-Ubuntu SMP Tue Aug 29 06:49:34 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [d53155ebbdb16076351cd58834bbe9d1c0c64f95ecd392a8752727b9febf1064] <==
	* I0911 11:28:02.065392       1 main.go:116] setting mtu 1500 for CNI 
	I0911 11:28:02.065408       1 main.go:146] kindnetd IP family: "ipv4"
	I0911 11:28:02.065440       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0911 11:28:32.387864       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: i/o timeout
	I0911 11:28:32.395345       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0911 11:28:32.395375       1 main.go:227] handling current node
	I0911 11:28:42.409413       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0911 11:28:42.409439       1 main.go:227] handling current node
	I0911 11:28:52.421951       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0911 11:28:52.421979       1 main.go:227] handling current node
	I0911 11:28:52.421992       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I0911 11:28:52.421998       1 main.go:250] Node multinode-517978-m02 has CIDR [10.244.1.0/24] 
	I0911 11:28:52.422207       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.58.3 Flags: [] Table: 0} 
	I0911 11:29:02.426566       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0911 11:29:02.426591       1 main.go:227] handling current node
	I0911 11:29:02.426603       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I0911 11:29:02.426609       1 main.go:250] Node multinode-517978-m02 has CIDR [10.244.1.0/24] 
	I0911 11:29:12.438426       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0911 11:29:12.438456       1 main.go:227] handling current node
	I0911 11:29:12.438474       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I0911 11:29:12.438487       1 main.go:250] Node multinode-517978-m02 has CIDR [10.244.1.0/24] 
	I0911 11:29:22.442734       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0911 11:29:22.442760       1 main.go:227] handling current node
	I0911 11:29:22.442770       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I0911 11:29:22.442774       1 main.go:250] Node multinode-517978-m02 has CIDR [10.244.1.0/24] 
	
	* 
	* ==> kube-apiserver [1af18db79efe940ada9f3a8f36848b61b1b23ecf891c17ded868ff0de28a4905] <==
	* I0911 11:27:45.768343       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0911 11:27:45.768400       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0911 11:27:45.768492       1 shared_informer.go:318] Caches are synced for configmaps
	I0911 11:27:45.768731       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0911 11:27:45.768763       1 controller.go:624] quota admission added evaluator for: namespaces
	I0911 11:27:45.769081       1 aggregator.go:166] initial CRD sync complete...
	I0911 11:27:45.769828       1 autoregister_controller.go:141] Starting autoregister controller
	I0911 11:27:45.769851       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0911 11:27:45.769865       1 cache.go:39] Caches are synced for autoregister controller
	I0911 11:27:45.863464       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0911 11:27:46.607830       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0911 11:27:46.611206       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0911 11:27:46.611222       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0911 11:27:46.987703       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0911 11:27:47.019815       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0911 11:27:47.083955       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0911 11:27:47.089260       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.58.2]
	I0911 11:27:47.090273       1 controller.go:624] quota admission added evaluator for: endpoints
	I0911 11:27:47.094061       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0911 11:27:47.674204       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0911 11:27:48.636326       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0911 11:27:48.647712       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0911 11:27:48.656490       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0911 11:28:01.467778       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I0911 11:28:01.471333       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	* 
	* ==> kube-controller-manager [0a7f53d516d99ef98415425c01e9e19af6db9183230ce9b11a51972625dbc83e] <==
	* I0911 11:28:32.980025       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="95.848µs"
	I0911 11:28:32.998478       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="98.969µs"
	I0911 11:28:33.860996       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="120.335µs"
	I0911 11:28:33.894476       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="12.931023ms"
	I0911 11:28:33.894772       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="77.93µs"
	I0911 11:28:36.431432       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I0911 11:28:47.147888       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-517978-m02\" does not exist"
	I0911 11:28:47.155953       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-517978-m02" podCIDRs=["10.244.1.0/24"]
	I0911 11:28:47.158298       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-bn2kk"
	I0911 11:28:47.158402       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-65nwg"
	I0911 11:28:51.433043       1 event.go:307] "Event occurred" object="multinode-517978-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-517978-m02 event: Registered Node multinode-517978-m02 in Controller"
	I0911 11:28:51.433141       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-517978-m02"
	I0911 11:29:18.955527       1 topologycache.go:231] "Can't get CPU or zone information for node" node="multinode-517978-m02"
	I0911 11:29:21.355768       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-5bc68d56bd to 2"
	I0911 11:29:21.364581       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-l4r9c"
	I0911 11:29:21.368020       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-qrkdr"
	I0911 11:29:21.374938       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="19.413854ms"
	I0911 11:29:21.380871       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="5.87958ms"
	I0911 11:29:21.394021       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="13.096173ms"
	I0911 11:29:21.394139       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="79.815µs"
	I0911 11:29:21.445704       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd-l4r9c" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5bc68d56bd-l4r9c"
	I0911 11:29:23.741859       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="3.714935ms"
	I0911 11:29:23.741957       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="57.501µs"
	I0911 11:29:23.949024       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="3.802129ms"
	I0911 11:29:23.949097       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="42.165µs"
	
	* 
	* ==> kube-proxy [af9c458a401e2dbc512bcd3a1b1ca047bb4020c683d860be06a0c2fee3b9d2d0] <==
	* I0911 11:28:02.091625       1 server_others.go:69] "Using iptables proxy"
	I0911 11:28:02.101026       1 node.go:141] Successfully retrieved node IP: 192.168.58.2
	I0911 11:28:02.176397       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0911 11:28:02.179715       1 server_others.go:152] "Using iptables Proxier"
	I0911 11:28:02.179754       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0911 11:28:02.179765       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0911 11:28:02.179801       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0911 11:28:02.180093       1 server.go:846] "Version info" version="v1.28.1"
	I0911 11:28:02.180124       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0911 11:28:02.181053       1 config.go:188] "Starting service config controller"
	I0911 11:28:02.181589       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0911 11:28:02.181717       1 config.go:315] "Starting node config controller"
	I0911 11:28:02.181779       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0911 11:28:02.181499       1 config.go:97] "Starting endpoint slice config controller"
	I0911 11:28:02.182244       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0911 11:28:02.282331       1 shared_informer.go:318] Caches are synced for node config
	I0911 11:28:02.282380       1 shared_informer.go:318] Caches are synced for service config
	I0911 11:28:02.282889       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [a3214737a04ce25902e02800ff4168234346baa3bca0c028a69eae7e262ce5e2] <==
	* W0911 11:27:45.782408       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0911 11:27:45.782423       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0911 11:27:45.782479       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0911 11:27:45.782493       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0911 11:27:45.782526       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0911 11:27:45.782555       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0911 11:27:45.782592       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0911 11:27:45.782610       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0911 11:27:45.782706       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0911 11:27:45.782728       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0911 11:27:45.858231       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0911 11:27:45.858319       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0911 11:27:45.861121       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0911 11:27:45.861325       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0911 11:27:46.623981       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0911 11:27:46.624013       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0911 11:27:46.724528       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0911 11:27:46.724567       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0911 11:27:46.767894       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0911 11:27:46.767924       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0911 11:27:46.866814       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0911 11:27:46.866854       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0911 11:27:46.905120       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0911 11:27:46.905150       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0911 11:27:49.276144       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* Sep 11 11:28:01 multinode-517978 kubelet[1594]: I0911 11:28:01.585085    1594 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/68f14c0f-00e4-4014-9613-36142d843e61-kube-proxy\") pod \"kube-proxy-s8g9f\" (UID: \"68f14c0f-00e4-4014-9613-36142d843e61\") " pod="kube-system/kube-proxy-s8g9f"
	Sep 11 11:28:01 multinode-517978 kubelet[1594]: I0911 11:28:01.585261    1594 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/54ada390-018a-48d2-841d-5b48f8117601-xtables-lock\") pod \"kindnet-4qgdc\" (UID: \"54ada390-018a-48d2-841d-5b48f8117601\") " pod="kube-system/kindnet-4qgdc"
	Sep 11 11:28:01 multinode-517978 kubelet[1594]: I0911 11:28:01.585311    1594 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/68f14c0f-00e4-4014-9613-36142d843e61-lib-modules\") pod \"kube-proxy-s8g9f\" (UID: \"68f14c0f-00e4-4014-9613-36142d843e61\") " pod="kube-system/kube-proxy-s8g9f"
	Sep 11 11:28:01 multinode-517978 kubelet[1594]: I0911 11:28:01.585371    1594 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t26bn\" (UniqueName: \"kubernetes.io/projected/68f14c0f-00e4-4014-9613-36142d843e61-kube-api-access-t26bn\") pod \"kube-proxy-s8g9f\" (UID: \"68f14c0f-00e4-4014-9613-36142d843e61\") " pod="kube-system/kube-proxy-s8g9f"
	Sep 11 11:28:01 multinode-517978 kubelet[1594]: I0911 11:28:01.660711    1594 kuberuntime_manager.go:1463] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 11 11:28:01 multinode-517978 kubelet[1594]: I0911 11:28:01.661465    1594 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 11 11:28:01 multinode-517978 kubelet[1594]: W0911 11:28:01.823042    1594 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/0e320f19ce0ce722709872eed592d3b3d1159c017b64f129d929e2069d1e41e1/crio-e1f0e2c82cf3bad5df4b215ade3ba263ec68085c322cc2214999150bb28cd706 WatchSource:0}: Error finding container e1f0e2c82cf3bad5df4b215ade3ba263ec68085c322cc2214999150bb28cd706: Status 404 returned error can't find the container with id e1f0e2c82cf3bad5df4b215ade3ba263ec68085c322cc2214999150bb28cd706
	Sep 11 11:28:01 multinode-517978 kubelet[1594]: W0911 11:28:01.823304    1594 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/0e320f19ce0ce722709872eed592d3b3d1159c017b64f129d929e2069d1e41e1/crio-50cbdbe4b2252eab8dd6a7a4411cf011236e352031be6945e98ba99a8f939761 WatchSource:0}: Error finding container 50cbdbe4b2252eab8dd6a7a4411cf011236e352031be6945e98ba99a8f939761: Status 404 returned error can't find the container with id 50cbdbe4b2252eab8dd6a7a4411cf011236e352031be6945e98ba99a8f939761
	Sep 11 11:28:02 multinode-517978 kubelet[1594]: I0911 11:28:02.808770    1594 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-4qgdc" podStartSLOduration=1.808708832 podCreationTimestamp="2023-09-11 11:28:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-09-11 11:28:02.808659857 +0000 UTC m=+14.199630935" watchObservedRunningTime="2023-09-11 11:28:02.808708832 +0000 UTC m=+14.199679909"
	Sep 11 11:28:02 multinode-517978 kubelet[1594]: I0911 11:28:02.858922    1594 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-s8g9f" podStartSLOduration=1.85882661 podCreationTimestamp="2023-09-11 11:28:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-09-11 11:28:02.858751162 +0000 UTC m=+14.249722240" watchObservedRunningTime="2023-09-11 11:28:02.85882661 +0000 UTC m=+14.249797687"
	Sep 11 11:28:32 multinode-517978 kubelet[1594]: I0911 11:28:32.956027    1594 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Sep 11 11:28:32 multinode-517978 kubelet[1594]: I0911 11:28:32.978277    1594 topology_manager.go:215] "Topology Admit Handler" podUID="41366aba-7ecd-49af-a3e7-4139062a82c2" podNamespace="kube-system" podName="storage-provisioner"
	Sep 11 11:28:32 multinode-517978 kubelet[1594]: I0911 11:28:32.980048    1594 topology_manager.go:215] "Topology Admit Handler" podUID="b64f2269-78cb-4e36-a2a7-e1818a2b093b" podNamespace="kube-system" podName="coredns-5dd5756b68-lmlsc"
	Sep 11 11:28:33 multinode-517978 kubelet[1594]: I0911 11:28:33.092959    1594 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/41366aba-7ecd-49af-a3e7-4139062a82c2-tmp\") pod \"storage-provisioner\" (UID: \"41366aba-7ecd-49af-a3e7-4139062a82c2\") " pod="kube-system/storage-provisioner"
	Sep 11 11:28:33 multinode-517978 kubelet[1594]: I0911 11:28:33.093005    1594 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b64f2269-78cb-4e36-a2a7-e1818a2b093b-config-volume\") pod \"coredns-5dd5756b68-lmlsc\" (UID: \"b64f2269-78cb-4e36-a2a7-e1818a2b093b\") " pod="kube-system/coredns-5dd5756b68-lmlsc"
	Sep 11 11:28:33 multinode-517978 kubelet[1594]: I0911 11:28:33.093030    1594 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fq8m\" (UniqueName: \"kubernetes.io/projected/41366aba-7ecd-49af-a3e7-4139062a82c2-kube-api-access-2fq8m\") pod \"storage-provisioner\" (UID: \"41366aba-7ecd-49af-a3e7-4139062a82c2\") " pod="kube-system/storage-provisioner"
	Sep 11 11:28:33 multinode-517978 kubelet[1594]: I0911 11:28:33.093052    1594 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrj8d\" (UniqueName: \"kubernetes.io/projected/b64f2269-78cb-4e36-a2a7-e1818a2b093b-kube-api-access-nrj8d\") pod \"coredns-5dd5756b68-lmlsc\" (UID: \"b64f2269-78cb-4e36-a2a7-e1818a2b093b\") " pod="kube-system/coredns-5dd5756b68-lmlsc"
	Sep 11 11:28:33 multinode-517978 kubelet[1594]: W0911 11:28:33.314855    1594 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/0e320f19ce0ce722709872eed592d3b3d1159c017b64f129d929e2069d1e41e1/crio-f765bcd9d83c787abb4cd3bef27fc70ebc7e55ec2528f06898ad3f3ba006e31f WatchSource:0}: Error finding container f765bcd9d83c787abb4cd3bef27fc70ebc7e55ec2528f06898ad3f3ba006e31f: Status 404 returned error can't find the container with id f765bcd9d83c787abb4cd3bef27fc70ebc7e55ec2528f06898ad3f3ba006e31f
	Sep 11 11:28:33 multinode-517978 kubelet[1594]: W0911 11:28:33.315100    1594 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/0e320f19ce0ce722709872eed592d3b3d1159c017b64f129d929e2069d1e41e1/crio-eb88cdffe5df6e94bac7b945845c69f28a777783e530e1b4b314f462a73a69a2 WatchSource:0}: Error finding container eb88cdffe5df6e94bac7b945845c69f28a777783e530e1b4b314f462a73a69a2: Status 404 returned error can't find the container with id eb88cdffe5df6e94bac7b945845c69f28a777783e530e1b4b314f462a73a69a2
	Sep 11 11:28:33 multinode-517978 kubelet[1594]: I0911 11:28:33.861008    1594 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-lmlsc" podStartSLOduration=32.860958394 podCreationTimestamp="2023-09-11 11:28:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-09-11 11:28:33.860785345 +0000 UTC m=+45.251756423" watchObservedRunningTime="2023-09-11 11:28:33.860958394 +0000 UTC m=+45.251929470"
	Sep 11 11:28:33 multinode-517978 kubelet[1594]: I0911 11:28:33.869785    1594 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=31.869736248 podCreationTimestamp="2023-09-11 11:28:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-09-11 11:28:33.86948988 +0000 UTC m=+45.260460958" watchObservedRunningTime="2023-09-11 11:28:33.869736248 +0000 UTC m=+45.260707325"
	Sep 11 11:29:21 multinode-517978 kubelet[1594]: I0911 11:29:21.375177    1594 topology_manager.go:215] "Topology Admit Handler" podUID="69894672-42f8-46b3-9c22-e5b2f88a7734" podNamespace="default" podName="busybox-5bc68d56bd-qrkdr"
	Sep 11 11:29:21 multinode-517978 kubelet[1594]: I0911 11:29:21.471670    1594 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2jpp\" (UniqueName: \"kubernetes.io/projected/69894672-42f8-46b3-9c22-e5b2f88a7734-kube-api-access-k2jpp\") pod \"busybox-5bc68d56bd-qrkdr\" (UID: \"69894672-42f8-46b3-9c22-e5b2f88a7734\") " pod="default/busybox-5bc68d56bd-qrkdr"
	Sep 11 11:29:21 multinode-517978 kubelet[1594]: W0911 11:29:21.719020    1594 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/0e320f19ce0ce722709872eed592d3b3d1159c017b64f129d929e2069d1e41e1/crio-e82373e32f00f19d28bbb5e6aec51b2bed7fc91261e2630e87715ace77badc93 WatchSource:0}: Error finding container e82373e32f00f19d28bbb5e6aec51b2bed7fc91261e2630e87715ace77badc93: Status 404 returned error can't find the container with id e82373e32f00f19d28bbb5e6aec51b2bed7fc91261e2630e87715ace77badc93
	Sep 11 11:29:23 multinode-517978 kubelet[1594]: I0911 11:29:23.945270    1594 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox-5bc68d56bd-qrkdr" podStartSLOduration=1.390332136 podCreationTimestamp="2023-09-11 11:29:21 +0000 UTC" firstStartedPulling="2023-09-11 11:29:21.722827193 +0000 UTC m=+93.113798265" lastFinishedPulling="2023-09-11 11:29:23.277717805 +0000 UTC m=+94.668688864" observedRunningTime="2023-09-11 11:29:23.945078077 +0000 UTC m=+95.336049154" watchObservedRunningTime="2023-09-11 11:29:23.945222735 +0000 UTC m=+95.336193813"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-517978 -n multinode-517978
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-517978 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (3.43s)

                                                
                                    
x
+
TestRunningBinaryUpgrade (65.74s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:133: (dbg) Run:  /tmp/minikube-v1.9.0.741621458.exe start -p running-upgrade-398660 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:133: (dbg) Done: /tmp/minikube-v1.9.0.741621458.exe start -p running-upgrade-398660 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m0.706424519s)
version_upgrade_test.go:143: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-398660 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:143: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p running-upgrade-398660 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (2.325387779s)

                                                
                                                
-- stdout --
	* [running-upgrade-398660] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17223
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17223-136166/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17223-136166/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.1
	* Using the docker driver based on existing profile
	* Starting control plane node running-upgrade-398660 in cluster running-upgrade-398660
	* Pulling base image ...
	* Updating the running docker "running-upgrade-398660" container ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0911 11:42:54.987566  330119 out.go:296] Setting OutFile to fd 1 ...
	I0911 11:42:54.987738  330119 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 11:42:54.987747  330119 out.go:309] Setting ErrFile to fd 2...
	I0911 11:42:54.987754  330119 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 11:42:54.987962  330119 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17223-136166/.minikube/bin
	I0911 11:42:54.988541  330119 out.go:303] Setting JSON to false
	I0911 11:42:54.990172  330119 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":5123,"bootTime":1694427452,"procs":642,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0911 11:42:54.990249  330119 start.go:138] virtualization: kvm guest
	I0911 11:42:54.993498  330119 out.go:177] * [running-upgrade-398660] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0911 11:42:54.995952  330119 out.go:177]   - MINIKUBE_LOCATION=17223
	I0911 11:42:54.996020  330119 notify.go:220] Checking for updates...
	I0911 11:42:54.997693  330119 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0911 11:42:54.999401  330119 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17223-136166/kubeconfig
	I0911 11:42:55.001210  330119 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17223-136166/.minikube
	I0911 11:42:55.003587  330119 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0911 11:42:55.005304  330119 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0911 11:42:55.008186  330119 config.go:182] Loaded profile config "running-upgrade-398660": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I0911 11:42:55.008213  330119 start_flags.go:698] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b
	I0911 11:42:55.010268  330119 out.go:177] * Kubernetes 1.28.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.1
	I0911 11:42:55.011843  330119 driver.go:373] Setting default libvirt URI to qemu:///system
	I0911 11:42:55.042998  330119 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I0911 11:42:55.043133  330119 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0911 11:42:55.121390  330119 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:76 OomKillDisable:true NGoroutines:72 SystemTime:2023-09-11 11:42:55.106987728 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1041-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0911 11:42:55.121513  330119 docker.go:294] overlay module found
	I0911 11:42:55.123596  330119 out.go:177] * Using the docker driver based on existing profile
	I0911 11:42:55.125333  330119 start.go:298] selected driver: docker
	I0911 11:42:55.125352  330119 start.go:902] validating driver "docker" against &{Name:running-upgrade-398660 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:running-upgrade-398660 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.2 Port:8443 KubernetesVersion:v1.18.0 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s}
	I0911 11:42:55.125471  330119 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0911 11:42:55.126536  330119 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0911 11:42:55.207962  330119 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:76 OomKillDisable:true NGoroutines:72 SystemTime:2023-09-11 11:42:55.196804696 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1041-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0911 11:42:55.208246  330119 cni.go:84] Creating CNI manager for ""
	I0911 11:42:55.208272  330119 cni.go:129] EnableDefaultCNI is true, recommending bridge
	I0911 11:42:55.208284  330119 start_flags.go:321] config:
	{Name:running-upgrade-398660 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:running-upgrade-398660 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.2 Port:8443 KubernetesVersion:v1.18.0 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s}
	I0911 11:42:55.210455  330119 out.go:177] * Starting control plane node running-upgrade-398660 in cluster running-upgrade-398660
	I0911 11:42:55.212086  330119 cache.go:122] Beginning downloading kic base image for docker with crio
	I0911 11:42:55.213676  330119 out.go:177] * Pulling base image ...
	I0911 11:42:55.215126  330119 preload.go:132] Checking if preload exists for k8s version v1.18.0 and runtime crio
	I0911 11:42:55.215314  330119 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b in local docker daemon
	I0911 11:42:55.238194  330119 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b in local docker daemon, skipping pull
	I0911 11:42:55.238226  330119 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b exists in daemon, skipping load
	W0911 11:42:55.243027  330119 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.0/preloaded-images-k8s-v18-v1.18.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I0911 11:42:55.243232  330119 profile.go:148] Saving config to /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/running-upgrade-398660/config.json ...
	I0911 11:42:55.243464  330119 cache.go:107] acquiring lock: {Name:mk384b71cfc0bb66ec786e7643f765b354a98d8d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 11:42:55.243524  330119 cache.go:195] Successfully downloaded all kic artifacts
	I0911 11:42:55.243564  330119 cache.go:115] /home/jenkins/minikube-integration/17223-136166/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0911 11:42:55.243583  330119 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17223-136166/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 128.51µs
	I0911 11:42:55.243599  330119 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17223-136166/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0911 11:42:55.243562  330119 start.go:365] acquiring machines lock for running-upgrade-398660: {Name:mk2dbaa53e051c6fbbcf738bfab8ad9901d53574 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 11:42:55.243608  330119 cache.go:107] acquiring lock: {Name:mk92d5f18b459ef1447e41300bc8eadd185c0fb4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 11:42:55.243672  330119 start.go:369] acquired machines lock for "running-upgrade-398660" in 57.439µs
	I0911 11:42:55.243695  330119 start.go:96] Skipping create...Using existing machine configuration
	I0911 11:42:55.243688  330119 cache.go:107] acquiring lock: {Name:mk4e3f16e0fd79216a56d9afca6bd561cd610161 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 11:42:55.243721  330119 cache.go:107] acquiring lock: {Name:mkfa10c8a6b52d9b4702ced602cb8404dfc2111f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 11:42:55.243747  330119 cache.go:107] acquiring lock: {Name:mk869fee9a58f062efe76266f139b731dd047eeb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 11:42:55.243765  330119 cache.go:115] /home/jenkins/minikube-integration/17223-136166/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2 exists
	I0911 11:42:55.243773  330119 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/17223-136166/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2" took 54.319µs
	I0911 11:42:55.243782  330119 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/17223-136166/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2 succeeded
	I0911 11:42:55.243787  330119 cache.go:115] /home/jenkins/minikube-integration/17223-136166/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0 exists
	I0911 11:42:55.243795  330119 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.18.0" -> "/home/jenkins/minikube-integration/17223-136166/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0" took 50.231µs
	I0911 11:42:55.243822  330119 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.18.0 -> /home/jenkins/minikube-integration/17223-136166/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0 succeeded
	I0911 11:42:55.243735  330119 cache.go:115] /home/jenkins/minikube-integration/17223-136166/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0 exists
	I0911 11:42:55.243834  330119 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.18.0" -> "/home/jenkins/minikube-integration/17223-136166/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0" took 149.019µs
	I0911 11:42:55.243837  330119 cache.go:107] acquiring lock: {Name:mk45643055242391004c1a8a71d3b89e39a6e3b2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 11:42:55.243849  330119 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.18.0 -> /home/jenkins/minikube-integration/17223-136166/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0 succeeded
	I0911 11:42:55.243814  330119 cache.go:107] acquiring lock: {Name:mkfefa09cc2cbe86eef01bfa3f974908c70eed76 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 11:42:55.243868  330119 cache.go:115] /home/jenkins/minikube-integration/17223-136166/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0 exists
	I0911 11:42:55.243703  330119 fix.go:54] fixHost starting: m01
	I0911 11:42:55.243880  330119 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.18.0" -> "/home/jenkins/minikube-integration/17223-136166/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0" took 45.224µs
	I0911 11:42:55.243896  330119 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.18.0 -> /home/jenkins/minikube-integration/17223-136166/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0 succeeded
	I0911 11:42:55.243874  330119 cache.go:115] /home/jenkins/minikube-integration/17223-136166/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7 exists
	I0911 11:42:55.243908  330119 cache.go:96] cache image "registry.k8s.io/coredns:1.6.7" -> "/home/jenkins/minikube-integration/17223-136166/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7" took 95.132µs
	I0911 11:42:55.243922  330119 cache.go:80] save to tar file registry.k8s.io/coredns:1.6.7 -> /home/jenkins/minikube-integration/17223-136166/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7 succeeded
	I0911 11:42:55.243795  330119 cache.go:107] acquiring lock: {Name:mk34cddff34b6048f93dc35357fce60b6c7abfc3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 11:42:55.243947  330119 cache.go:115] /home/jenkins/minikube-integration/17223-136166/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 exists
	I0911 11:42:55.243958  330119 cache.go:96] cache image "registry.k8s.io/etcd:3.4.3-0" -> "/home/jenkins/minikube-integration/17223-136166/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0" took 165.539µs
	I0911 11:42:55.243974  330119 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.3-0 -> /home/jenkins/minikube-integration/17223-136166/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 succeeded
	I0911 11:42:55.243679  330119 cache.go:115] /home/jenkins/minikube-integration/17223-136166/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0 exists
	I0911 11:42:55.244065  330119 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.18.0" -> "/home/jenkins/minikube-integration/17223-136166/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0" took 472.022µs
	I0911 11:42:55.244078  330119 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.18.0 -> /home/jenkins/minikube-integration/17223-136166/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0 succeeded
	I0911 11:42:55.244088  330119 cache.go:87] Successfully saved all images to host disk.
	I0911 11:42:55.244208  330119 cli_runner.go:164] Run: docker container inspect running-upgrade-398660 --format={{.State.Status}}
	I0911 11:42:55.266310  330119 fix.go:102] recreateIfNeeded on running-upgrade-398660: state=Running err=<nil>
	W0911 11:42:55.266344  330119 fix.go:128] unexpected machine state, will restart: <nil>
	I0911 11:42:55.268828  330119 out.go:177] * Updating the running docker "running-upgrade-398660" container ...
	I0911 11:42:55.270559  330119 machine.go:88] provisioning docker machine ...
	I0911 11:42:55.270603  330119 ubuntu.go:169] provisioning hostname "running-upgrade-398660"
	I0911 11:42:55.270677  330119 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-398660
	I0911 11:42:55.291801  330119 main.go:141] libmachine: Using SSH client type: native
	I0911 11:42:55.292240  330119 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 127.0.0.1 33102 <nil> <nil>}
	I0911 11:42:55.292258  330119 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-398660 && echo "running-upgrade-398660" | sudo tee /etc/hostname
	I0911 11:42:55.416441  330119 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-398660
	
	I0911 11:42:55.416533  330119 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-398660
	I0911 11:42:55.440637  330119 main.go:141] libmachine: Using SSH client type: native
	I0911 11:42:55.441282  330119 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 127.0.0.1 33102 <nil> <nil>}
	I0911 11:42:55.441311  330119 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-398660' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-398660/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-398660' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0911 11:42:55.558127  330119 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0911 11:42:55.558165  330119 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17223-136166/.minikube CaCertPath:/home/jenkins/minikube-integration/17223-136166/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17223-136166/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17223-136166/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17223-136166/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17223-136166/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17223-136166/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17223-136166/.minikube}
	I0911 11:42:55.558193  330119 ubuntu.go:177] setting up certificates
	I0911 11:42:55.558204  330119 provision.go:83] configureAuth start
	I0911 11:42:55.558283  330119 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-398660
	I0911 11:42:55.584627  330119 provision.go:138] copyHostCerts
	I0911 11:42:55.584711  330119 exec_runner.go:144] found /home/jenkins/minikube-integration/17223-136166/.minikube/ca.pem, removing ...
	I0911 11:42:55.584727  330119 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17223-136166/.minikube/ca.pem
	I0911 11:42:55.584814  330119 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17223-136166/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17223-136166/.minikube/ca.pem (1082 bytes)
	I0911 11:42:55.584921  330119 exec_runner.go:144] found /home/jenkins/minikube-integration/17223-136166/.minikube/cert.pem, removing ...
	I0911 11:42:55.584930  330119 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17223-136166/.minikube/cert.pem
	I0911 11:42:55.584963  330119 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17223-136166/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17223-136166/.minikube/cert.pem (1123 bytes)
	I0911 11:42:55.585030  330119 exec_runner.go:144] found /home/jenkins/minikube-integration/17223-136166/.minikube/key.pem, removing ...
	I0911 11:42:55.585036  330119 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17223-136166/.minikube/key.pem
	I0911 11:42:55.585065  330119 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17223-136166/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17223-136166/.minikube/key.pem (1679 bytes)
	I0911 11:42:55.585118  330119 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17223-136166/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17223-136166/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17223-136166/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-398660 san=[172.17.0.2 127.0.0.1 localhost 127.0.0.1 minikube running-upgrade-398660]
	I0911 11:42:55.724543  330119 provision.go:172] copyRemoteCerts
	I0911 11:42:55.724612  330119 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0911 11:42:55.724659  330119 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-398660
	I0911 11:42:55.744213  330119 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33102 SSHKeyPath:/home/jenkins/minikube-integration/17223-136166/.minikube/machines/running-upgrade-398660/id_rsa Username:docker}
	I0911 11:42:55.834984  330119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0911 11:42:55.855726  330119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0911 11:42:55.875032  330119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0911 11:42:55.898066  330119 provision.go:86] duration metric: configureAuth took 339.845386ms
	I0911 11:42:55.898121  330119 ubuntu.go:193] setting minikube options for container-runtime
	I0911 11:42:55.898372  330119 config.go:182] Loaded profile config "running-upgrade-398660": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I0911 11:42:55.898513  330119 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-398660
	I0911 11:42:55.920343  330119 main.go:141] libmachine: Using SSH client type: native
	I0911 11:42:55.920740  330119 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 127.0.0.1 33102 <nil> <nil>}
	I0911 11:42:55.920767  330119 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0911 11:42:56.387658  330119 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0911 11:42:56.387755  330119 machine.go:91] provisioned docker machine in 1.11716897s
	I0911 11:42:56.387773  330119 start.go:300] post-start starting for "running-upgrade-398660" (driver="docker")
	I0911 11:42:56.387787  330119 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0911 11:42:56.387869  330119 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0911 11:42:56.387972  330119 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-398660
	I0911 11:42:56.406133  330119 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33102 SSHKeyPath:/home/jenkins/minikube-integration/17223-136166/.minikube/machines/running-upgrade-398660/id_rsa Username:docker}
	I0911 11:42:56.486387  330119 ssh_runner.go:195] Run: cat /etc/os-release
	I0911 11:42:56.489414  330119 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0911 11:42:56.489445  330119 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0911 11:42:56.489458  330119 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0911 11:42:56.489467  330119 info.go:137] Remote host: Ubuntu 19.10
	I0911 11:42:56.489481  330119 filesync.go:126] Scanning /home/jenkins/minikube-integration/17223-136166/.minikube/addons for local assets ...
	I0911 11:42:56.489549  330119 filesync.go:126] Scanning /home/jenkins/minikube-integration/17223-136166/.minikube/files for local assets ...
	I0911 11:42:56.489640  330119 filesync.go:149] local asset: /home/jenkins/minikube-integration/17223-136166/.minikube/files/etc/ssl/certs/1434172.pem -> 1434172.pem in /etc/ssl/certs
	I0911 11:42:56.489748  330119 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0911 11:42:56.497443  330119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/files/etc/ssl/certs/1434172.pem --> /etc/ssl/certs/1434172.pem (1708 bytes)
	I0911 11:42:56.514895  330119 start.go:303] post-start completed in 127.103299ms
	I0911 11:42:56.514974  330119 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0911 11:42:56.515017  330119 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-398660
	I0911 11:42:56.545825  330119 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33102 SSHKeyPath:/home/jenkins/minikube-integration/17223-136166/.minikube/machines/running-upgrade-398660/id_rsa Username:docker}
	I0911 11:42:56.626908  330119 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0911 11:42:56.631131  330119 fix.go:56] fixHost completed within 1.387419704s
	I0911 11:42:56.631164  330119 start.go:83] releasing machines lock for "running-upgrade-398660", held for 1.387473899s
	I0911 11:42:56.631236  330119 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-398660
	I0911 11:42:56.650410  330119 ssh_runner.go:195] Run: cat /version.json
	I0911 11:42:56.650485  330119 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-398660
	I0911 11:42:56.650536  330119 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0911 11:42:56.650608  330119 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-398660
	I0911 11:42:56.675948  330119 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33102 SSHKeyPath:/home/jenkins/minikube-integration/17223-136166/.minikube/machines/running-upgrade-398660/id_rsa Username:docker}
	I0911 11:42:56.680087  330119 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33102 SSHKeyPath:/home/jenkins/minikube-integration/17223-136166/.minikube/machines/running-upgrade-398660/id_rsa Username:docker}
	W0911 11:42:56.792049  330119 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0911 11:42:56.792146  330119 ssh_runner.go:195] Run: systemctl --version
	I0911 11:42:56.797012  330119 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0911 11:42:56.856341  330119 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0911 11:42:56.861055  330119 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0911 11:42:56.878049  330119 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0911 11:42:56.878157  330119 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0911 11:42:56.903118  330119 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
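The two find/mv passes above disable conflicting CNI configs by renaming them with a .mk_disabled suffix so cri-o cannot load them. A minimal Go sketch of that rename pass, assuming the /etc/cni/net.d layout and suffix shown in the log (illustrative only, not minikube's actual cni package):

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
	)

	const disabledSuffix = ".mk_disabled"

	// disableConflictingCNI renames bridge/podman CNI configs so that only the
	// CNI minikube selects is loaded, mirroring the find/mv commands above.
	func disableConflictingCNI(dir string) ([]string, error) {
		entries, err := os.ReadDir(dir)
		if err != nil {
			return nil, err
		}
		var disabled []string
		for _, e := range entries {
			name := e.Name()
			if e.IsDir() || strings.HasSuffix(name, disabledSuffix) {
				continue
			}
			if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
				src := filepath.Join(dir, name)
				if err := os.Rename(src, src+disabledSuffix); err != nil {
					return disabled, err
				}
				disabled = append(disabled, src)
			}
		}
		return disabled, nil
	}

	func main() {
		disabled, err := disableConflictingCNI("/etc/cni/net.d")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Printf("disabled %v bridge cni config(s)\n", disabled)
	}

Renaming rather than deleting keeps the originals recoverable, which matches the "{}.mk_disabled" pattern in the find commands above.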
	I0911 11:42:56.903139  330119 start.go:466] detecting cgroup driver to use...
	I0911 11:42:56.903171  330119 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0911 11:42:56.903222  330119 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0911 11:42:56.924012  330119 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0911 11:42:56.933174  330119 docker.go:196] disabling cri-docker service (if available) ...
	I0911 11:42:56.933238  330119 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0911 11:42:56.943242  330119 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0911 11:42:56.953853  330119 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W0911 11:42:56.962885  330119 docker.go:206] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I0911 11:42:56.962939  330119 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0911 11:42:57.042705  330119 docker.go:212] disabling docker service ...
	I0911 11:42:57.042756  330119 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0911 11:42:57.052643  330119 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0911 11:42:57.062956  330119 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0911 11:42:57.139655  330119 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0911 11:42:57.221087  330119 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0911 11:42:57.231116  330119 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0911 11:42:57.245007  330119 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0911 11:42:57.245074  330119 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 11:42:57.255936  330119 out.go:177] 
	W0911 11:42:57.257428  330119 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W0911 11:42:57.257455  330119 out.go:239] * 
	W0911 11:42:57.258302  330119 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0911 11:42:57.260395  330119 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:145: upgrade from v1.9.0 to HEAD failed: out/minikube-linux-amd64 start -p running-upgrade-398660 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90
panic.go:522: *** TestRunningBinaryUpgrade FAILED at 2023-09-11 11:42:57.277819742 +0000 UTC m=+2020.321211942
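The root cause is visible in the stderr above: the v1.9.0 kicbase image (Ubuntu 19.10) has no /etc/crio/crio.conf.d/02-crio.conf drop-in, so the unconditional sed exits with status 2 and start aborts with RUNTIME_ENABLE. A hedged Go sketch of a more defensive update that probes for the drop-in and falls back to the main config (file names are taken from the log; the fallback path is an assumption, not minikube's current behavior):

	package main

	import (
		"fmt"
		"os"
		"regexp"
	)

	// updatePauseImage rewrites pause_image in the first config file that
	// exists, instead of assuming the crio.conf.d drop-in layout.
	func updatePauseImage(image string, candidates ...string) error {
		re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
		for _, path := range candidates {
			data, err := os.ReadFile(path)
			if os.IsNotExist(err) {
				continue // older kicbase images may lack the drop-in directory
			}
			if err != nil {
				return err
			}
			out := re.ReplaceAll(data, []byte(fmt.Sprintf("pause_image = %q", image)))
			return os.WriteFile(path, out, 0o644)
		}
		return fmt.Errorf("no crio config found in %v", candidates)
	}

	func main() {
		err := updatePauseImage("registry.k8s.io/pause:3.2",
			"/etc/crio/crio.conf.d/02-crio.conf", // modern layout
			"/etc/crio/crio.conf")                // assumed fallback for old images
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}

On the v1.9.0 kicbase the fallback would land in /etc/crio/crio.conf, avoiding the exit status 2 that aborts the upgrade here.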
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestRunningBinaryUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect running-upgrade-398660
helpers_test.go:235: (dbg) docker inspect running-upgrade-398660:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "64e342cc2698c01beda0d503c328417109f270d379290fe21e3f33c913feaac8",
	        "Created": "2023-09-11T11:41:54.661576798Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 318316,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-09-11T11:41:55.746524109Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:11589cdc9ef4b67a64cc243dd3cf013e81ad02bbed105fc37dc07aa272044680",
	        "ResolvConfPath": "/var/lib/docker/containers/64e342cc2698c01beda0d503c328417109f270d379290fe21e3f33c913feaac8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/64e342cc2698c01beda0d503c328417109f270d379290fe21e3f33c913feaac8/hostname",
	        "HostsPath": "/var/lib/docker/containers/64e342cc2698c01beda0d503c328417109f270d379290fe21e3f33c913feaac8/hosts",
	        "LogPath": "/var/lib/docker/containers/64e342cc2698c01beda0d503c328417109f270d379290fe21e3f33c913feaac8/64e342cc2698c01beda0d503c328417109f270d379290fe21e3f33c913feaac8-json.log",
	        "Name": "/running-upgrade-398660",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "running-upgrade-398660:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/603140ac03dcfe082d6b4ec8e68dfadaba5cbfaf536a3e9cdb81465fb8024124-init/diff:/var/lib/docker/overlay2/aadaa2f131894150b02cdf35561cd7fe7ddf075924e735c700bf78a5e6daaa78/diff:/var/lib/docker/overlay2/4978c018451b3f5795f917400b8ea73861386a808d14b7257fedd9dcd0ce3616/diff:/var/lib/docker/overlay2/9d5a7dca1324909f954dfb43e80bb49f7cad25e240740a4bad13a803652d124c/diff:/var/lib/docker/overlay2/ae4629c952e8482217fc811aefcb11c4dd83fe703a784be9ea453aedb594ac3b/diff:/var/lib/docker/overlay2/0ff6747b80e9c6076fab39ea40350a82f654e1f65bbd37ad64869b21daadab68/diff:/var/lib/docker/overlay2/842def900d258ee64462ac09308a88871d84ce88f52ac30a6b7b68faa989d8c3/diff:/var/lib/docker/overlay2/5356f83b1d19f0e8cfc1d93c3e3616db3b9f068955c4a7c726ca499fed8344dc/diff:/var/lib/docker/overlay2/f66b4773d683477a3fd77931f03e46c5c8812a4b363ef65ec0c5ee72e0b2120d/diff:/var/lib/docker/overlay2/f5ee2a23609e845dc29978f6e7b537e79886879a9d2be0e01db34e428f2bb9a2/diff:/var/lib/docker/overlay2/8d1ea2
c5dab8261729e95aadbe691329dbc544fe2d93e2649fb5849e0cbe9079/diff:/var/lib/docker/overlay2/70647d730eb171c71806bdac17b55a28df9492bcbc43c591ce136d7e5ef7ba7b/diff:/var/lib/docker/overlay2/79889926a70188e29a581d2cc8133f740b9c6f1f62677fbf2447d5668099cb3e/diff:/var/lib/docker/overlay2/fe1f3e8f2818d385a201c9ef8a6469cb3ca1c236dc901f79366433c9ac64844c/diff:/var/lib/docker/overlay2/ed3d979e0886706e0ff004e3e5ec2622901f6b3fe754cbbade0e8ff3c268008b/diff:/var/lib/docker/overlay2/1e987292e6c6288a77ab42e2bb1873460731cb3cb9559545eee9f3ed01a3c58b/diff:/var/lib/docker/overlay2/b0a1dabc4f134db51af2bb59b2392c47b221953e51944fe78e3adc34ca7961de/diff:/var/lib/docker/overlay2/8293548ec014e9618595685ebb6817d302efec50563d5f81c4342f9ea759fe5e/diff:/var/lib/docker/overlay2/08b48059f2749290e137faf430e3586e52b516421970919ade8760721d08d575/diff:/var/lib/docker/overlay2/51bab3678f4bed3325774f5207ea803df3b1ce6d1dee78992d13e0855d5a25f5/diff:/var/lib/docker/overlay2/4e94222bc48ff2012d0cab0e914650d4bac2599f6bfbe58c22554e8b950d3cc7/diff:/var/lib/d
ocker/overlay2/bcdda67b84a97e81a4498014394cb8194e30014f2340e9247fda395751b74423/diff",
	                "MergedDir": "/var/lib/docker/overlay2/603140ac03dcfe082d6b4ec8e68dfadaba5cbfaf536a3e9cdb81465fb8024124/merged",
	                "UpperDir": "/var/lib/docker/overlay2/603140ac03dcfe082d6b4ec8e68dfadaba5cbfaf536a3e9cdb81465fb8024124/diff",
	                "WorkDir": "/var/lib/docker/overlay2/603140ac03dcfe082d6b4ec8e68dfadaba5cbfaf536a3e9cdb81465fb8024124/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "running-upgrade-398660",
	                "Source": "/var/lib/docker/volumes/running-upgrade-398660/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "running-upgrade-398660",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	                "container=docker"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "running-upgrade-398660",
	                "name.minikube.sigs.k8s.io": "running-upgrade-398660",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c093d8857994774c6ab8e6e779c6bccab03717c14dc5f9fd2264677b0d952fe0",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33102"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33101"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33100"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/c093d8857994",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "5011f8c98f995102ed8d8098fe23cd167c977d0fa79aafb07d8572e22253bc94",
	            "Gateway": "172.17.0.1",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "172.17.0.2",
	            "IPPrefixLen": 16,
	            "IPv6Gateway": "",
	            "MacAddress": "02:42:ac:11:00:02",
	            "Networks": {
	                "bridge": {
	                    "IPAMConfig": null,
	                    "Links": null,
	                    "Aliases": null,
	                    "NetworkID": "f52bfef84d77fd27e0db21b1fe0f27eabde8b5ec5ccce1f235043409a01fb816",
	                    "EndpointID": "5011f8c98f995102ed8d8098fe23cd167c977d0fa79aafb07d8572e22253bc94",
	                    "Gateway": "172.17.0.1",
	                    "IPAddress": "172.17.0.2",
	                    "IPPrefixLen": 16,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:ac:11:00:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
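Throughout these logs the SSH port is recovered from output like the inspect dump above via the template {{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}} (yielding 33102 in this run). A small Go sketch of the same lookup via os/exec, under the assumption that docker is on PATH and the container publishes 22/tcp:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// sshHostPort returns the host port Docker mapped to the container's
	// 22/tcp, mirroring the inspect template used repeatedly in these logs.
	func sshHostPort(container string) (string, error) {
		format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
		if err != nil {
			return "", fmt.Errorf("docker inspect %s: %w", container, err)
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		port, err := sshHostPort("running-upgrade-398660")
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("ssh port:", port) // e.g. 33102 in this run
	}

Indexing the 22/tcp slice rather than parsing the full JSON keeps the query to a single docker invocation.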
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p running-upgrade-398660 -n running-upgrade-398660
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p running-upgrade-398660 -n running-upgrade-398660: exit status 4 (307.288587ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0911 11:42:57.557570  331025 status.go:415] kubeconfig endpoint: extract IP: "running-upgrade-398660" does not appear in /home/jenkins/minikube-integration/17223-136166/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 4 (may be ok)
helpers_test.go:241: "running-upgrade-398660" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
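The exit status 4 comes from status.go:415: the profile name no longer appears in the kubeconfig, which is exactly the stale-context situation the WARNING describes (minikube update-context -p running-upgrade-398660 would repair it). A stdlib-only Go sketch of the same sanity check, assuming a plain substring match on the profile name is enough for a smoke test (a real client would parse the YAML):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// hasContext reports whether a kubeconfig file mentions the given profile,
	// detecting the "does not appear in kubeconfig" failure seen above.
	func hasContext(kubeconfigPath, profile string) (bool, error) {
		data, err := os.ReadFile(kubeconfigPath)
		if err != nil {
			return false, err
		}
		return strings.Contains(string(data), profile), nil
	}

	func main() {
		// Default path is an assumption; this run used the KUBECONFIG shown above.
		ok, err := hasContext(os.Getenv("HOME")+"/.kube/config", "running-upgrade-398660")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("context present:", ok)
	}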
helpers_test.go:175: Cleaning up "running-upgrade-398660" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-398660
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-398660: (2.002758181s)
--- FAIL: TestRunningBinaryUpgrade (65.74s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (95.07s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:196: (dbg) Run:  /tmp/minikube-v1.9.0.3291578644.exe start -p stopped-upgrade-822606 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:196: (dbg) Done: /tmp/minikube-v1.9.0.3291578644.exe start -p stopped-upgrade-822606 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m26.103903765s)
version_upgrade_test.go:205: (dbg) Run:  /tmp/minikube-v1.9.0.3291578644.exe -p stopped-upgrade-822606 stop
version_upgrade_test.go:205: (dbg) Done: /tmp/minikube-v1.9.0.3291578644.exe -p stopped-upgrade-822606 stop: (3.014300617s)
version_upgrade_test.go:211: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-822606 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:211: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p stopped-upgrade-822606 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (5.95284827s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-822606] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17223
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17223-136166/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17223-136166/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.1
	* Using the docker driver based on existing profile
	* Starting control plane node stopped-upgrade-822606 in cluster stopped-upgrade-822606
	* Pulling base image ...
	* Restarting existing docker container for "stopped-upgrade-822606" ...
	
	

                                                
                                                
-- /stdout --
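The stderr below shows the restart sequence in detail: docker start at 11:41:16.995, an initial SSH handshake failure at 11:41:17.334 ("connection reset by peer") while sshd is still coming up, and a successful command at 11:41:20.450. A hedged Go sketch of the dial-and-retry loop that flow implies (timeouts and backoff are illustrative, not libmachine's actual values):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// waitForSSH polls the forwarded SSH port until a TCP connection succeeds
	// or the deadline passes; restarted containers typically accept
	// connections a few seconds after "docker start" returns.
	func waitForSSH(addr string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
			if err == nil {
				conn.Close()
				return nil
			}
			time.Sleep(500 * time.Millisecond) // sshd not ready yet; retry
		}
		return fmt.Errorf("ssh on %s not reachable within %s", addr, timeout)
	}

	func main() {
		if err := waitForSSH("127.0.0.1:33089", 30*time.Second); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("ssh is up")
	}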
** stderr ** 
	I0911 11:41:16.734606  311011 out.go:296] Setting OutFile to fd 1 ...
	I0911 11:41:16.734746  311011 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 11:41:16.734757  311011 out.go:309] Setting ErrFile to fd 2...
	I0911 11:41:16.734763  311011 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 11:41:16.734966  311011 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17223-136166/.minikube/bin
	I0911 11:41:16.735565  311011 out.go:303] Setting JSON to false
	I0911 11:41:16.736936  311011 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":5025,"bootTime":1694427452,"procs":676,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0911 11:41:16.736995  311011 start.go:138] virtualization: kvm guest
	I0911 11:41:16.739143  311011 out.go:177] * [stopped-upgrade-822606] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0911 11:41:16.741101  311011 out.go:177]   - MINIKUBE_LOCATION=17223
	I0911 11:41:16.741122  311011 notify.go:220] Checking for updates...
	I0911 11:41:16.742566  311011 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0911 11:41:16.744161  311011 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17223-136166/kubeconfig
	I0911 11:41:16.745673  311011 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17223-136166/.minikube
	I0911 11:41:16.747264  311011 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0911 11:41:16.748944  311011 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0911 11:41:16.750812  311011 config.go:182] Loaded profile config "stopped-upgrade-822606": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I0911 11:41:16.750834  311011 start_flags.go:698] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b
	I0911 11:41:16.752590  311011 out.go:177] * Kubernetes 1.28.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.1
	I0911 11:41:16.754252  311011 driver.go:373] Setting default libvirt URI to qemu:///system
	I0911 11:41:16.786851  311011 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I0911 11:41:16.787155  311011 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0911 11:41:16.855090  311011 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:47 OomKillDisable:true NGoroutines:56 SystemTime:2023-09-11 11:41:16.844707583 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1041-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0911 11:41:16.855215  311011 docker.go:294] overlay module found
	I0911 11:41:16.863421  311011 out.go:177] * Using the docker driver based on existing profile
	I0911 11:41:16.865054  311011 start.go:298] selected driver: docker
	I0911 11:41:16.865074  311011 start.go:902] validating driver "docker" against &{Name:stopped-upgrade-822606 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:stopped-upgrade-822606 Namespace: APIServerName:minikube
CA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.2 Port:8443 KubernetesVersion:v1.18.0 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: S
ocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s}
	I0911 11:41:16.865202  311011 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0911 11:41:16.866071  311011 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0911 11:41:16.934532  311011 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:54 OomKillDisable:true NGoroutines:63 SystemTime:2023-09-11 11:41:16.92496961 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1041-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archit
ecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> S
erverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0911 11:41:16.934885  311011 cni.go:84] Creating CNI manager for ""
	I0911 11:41:16.934902  311011 cni.go:129] EnableDefaultCNI is true, recommending bridge
	I0911 11:41:16.934908  311011 start_flags.go:321] config:
	{Name:stopped-upgrade-822606 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:stopped-upgrade-822606 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:cr
io CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.2 Port:8443 KubernetesVersion:v1.18.0 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 Auto
PauseInterval:0s}
	I0911 11:41:16.937262  311011 out.go:177] * Starting control plane node stopped-upgrade-822606 in cluster stopped-upgrade-822606
	I0911 11:41:16.939030  311011 cache.go:122] Beginning downloading kic base image for docker with crio
	I0911 11:41:16.940692  311011 out.go:177] * Pulling base image ...
	I0911 11:41:16.942049  311011 preload.go:132] Checking if preload exists for k8s version v1.18.0 and runtime crio
	I0911 11:41:16.942154  311011 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b in local docker daemon
	I0911 11:41:16.962549  311011 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b in local docker daemon, skipping pull
	I0911 11:41:16.962583  311011 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b exists in daemon, skipping load
	W0911 11:41:16.974419  311011 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.0/preloaded-images-k8s-v18-v1.18.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I0911 11:41:16.974587  311011 profile.go:148] Saving config to /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/stopped-upgrade-822606/config.json ...
	I0911 11:41:16.974679  311011 cache.go:107] acquiring lock: {Name:mk384b71cfc0bb66ec786e7643f765b354a98d8d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 11:41:16.974660  311011 cache.go:107] acquiring lock: {Name:mk869fee9a58f062efe76266f139b731dd047eeb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 11:41:16.974734  311011 cache.go:107] acquiring lock: {Name:mkfa10c8a6b52d9b4702ced602cb8404dfc2111f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 11:41:16.974680  311011 cache.go:107] acquiring lock: {Name:mk4e3f16e0fd79216a56d9afca6bd561cd610161 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 11:41:16.974780  311011 cache.go:107] acquiring lock: {Name:mk34cddff34b6048f93dc35357fce60b6c7abfc3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 11:41:16.974764  311011 cache.go:107] acquiring lock: {Name:mk45643055242391004c1a8a71d3b89e39a6e3b2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 11:41:16.974785  311011 cache.go:107] acquiring lock: {Name:mkfefa09cc2cbe86eef01bfa3f974908c70eed76 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 11:41:16.974797  311011 cache.go:107] acquiring lock: {Name:mk92d5f18b459ef1447e41300bc8eadd185c0fb4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 11:41:16.974903  311011 cache.go:195] Successfully downloaded all kic artifacts
	I0911 11:41:16.974931  311011 start.go:365] acquiring machines lock for stopped-upgrade-822606: {Name:mkf847870ca7b3baaa433c3909249a7e1d8ea7fc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 11:41:16.974976  311011 start.go:369] acquired machines lock for "stopped-upgrade-822606" in 38.238µs
	I0911 11:41:16.974991  311011 start.go:96] Skipping create...Using existing machine configuration
	I0911 11:41:16.974996  311011 fix.go:54] fixHost starting: m01
	I0911 11:41:16.975017  311011 cache.go:115] /home/jenkins/minikube-integration/17223-136166/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0911 11:41:16.975034  311011 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17223-136166/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 361.816µs
	I0911 11:41:16.975057  311011 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17223-136166/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0911 11:41:16.975185  311011 cli_runner.go:164] Run: docker container inspect stopped-upgrade-822606 --format={{.State.Status}}
	I0911 11:41:16.991355  311011 fix.go:102] recreateIfNeeded on stopped-upgrade-822606: state=Stopped err=<nil>
	W0911 11:41:16.991399  311011 fix.go:128] unexpected machine state, will restart: <nil>
	I0911 11:41:16.993600  311011 out.go:177] * Restarting existing docker container for "stopped-upgrade-822606" ...
	I0911 11:41:16.995156  311011 cli_runner.go:164] Run: docker start stopped-upgrade-822606
	I0911 11:41:17.208518  311011 cache.go:115] /home/jenkins/minikube-integration/17223-136166/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2 exists
	I0911 11:41:17.208542  311011 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/17223-136166/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2" took 233.819161ms
	I0911 11:41:17.208565  311011 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/17223-136166/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2 succeeded
	I0911 11:41:17.260941  311011 cli_runner.go:164] Run: docker container inspect stopped-upgrade-822606 --format={{.State.Status}}
	I0911 11:41:17.282020  311011 kic.go:426] container "stopped-upgrade-822606" state is running.
	I0911 11:41:17.284355  311011 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-822606
	I0911 11:41:17.312450  311011 profile.go:148] Saving config to /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/stopped-upgrade-822606/config.json ...
	I0911 11:41:17.312706  311011 machine.go:88] provisioning docker machine ...
	I0911 11:41:17.312736  311011 ubuntu.go:169] provisioning hostname "stopped-upgrade-822606"
	I0911 11:41:17.312787  311011 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-822606
	I0911 11:41:17.333217  311011 main.go:141] libmachine: Using SSH client type: native
	I0911 11:41:17.333660  311011 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 127.0.0.1 33089 <nil> <nil>}
	I0911 11:41:17.333678  311011 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-822606 && echo "stopped-upgrade-822606" | sudo tee /etc/hostname
	I0911 11:41:17.334371  311011 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:34176->127.0.0.1:33089: read: connection reset by peer
	I0911 11:41:17.671459  311011 cache.go:115] /home/jenkins/minikube-integration/17223-136166/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7 exists
	I0911 11:41:17.671483  311011 cache.go:96] cache image "registry.k8s.io/coredns:1.6.7" -> "/home/jenkins/minikube-integration/17223-136166/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7" took 696.698877ms
	I0911 11:41:17.671520  311011 cache.go:80] save to tar file registry.k8s.io/coredns:1.6.7 -> /home/jenkins/minikube-integration/17223-136166/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7 succeeded
	I0911 11:41:18.084156  311011 cache.go:115] /home/jenkins/minikube-integration/17223-136166/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0 exists
	I0911 11:41:18.084187  311011 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.18.0" -> "/home/jenkins/minikube-integration/17223-136166/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0" took 1.109456654s
	I0911 11:41:18.084204  311011 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.18.0 -> /home/jenkins/minikube-integration/17223-136166/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0 succeeded
	I0911 11:41:18.300690  311011 cache.go:115] /home/jenkins/minikube-integration/17223-136166/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0 exists
	I0911 11:41:18.300716  311011 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.18.0" -> "/home/jenkins/minikube-integration/17223-136166/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0" took 1.326048705s
	I0911 11:41:18.300733  311011 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.18.0 -> /home/jenkins/minikube-integration/17223-136166/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0 succeeded
	I0911 11:41:18.308555  311011 cache.go:115] /home/jenkins/minikube-integration/17223-136166/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0 exists
	I0911 11:41:18.308581  311011 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.18.0" -> "/home/jenkins/minikube-integration/17223-136166/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0" took 1.333933456s
	I0911 11:41:18.308603  311011 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.18.0 -> /home/jenkins/minikube-integration/17223-136166/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0 succeeded
	I0911 11:41:18.513365  311011 cache.go:115] /home/jenkins/minikube-integration/17223-136166/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0 exists
	I0911 11:41:18.513389  311011 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.18.0" -> "/home/jenkins/minikube-integration/17223-136166/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0" took 1.538625477s
	I0911 11:41:18.513401  311011 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.18.0 -> /home/jenkins/minikube-integration/17223-136166/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0 succeeded
	I0911 11:41:19.182958  311011 cache.go:115] /home/jenkins/minikube-integration/17223-136166/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 exists
	I0911 11:41:19.182983  311011 cache.go:96] cache image "registry.k8s.io/etcd:3.4.3-0" -> "/home/jenkins/minikube-integration/17223-136166/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0" took 2.208205829s
	I0911 11:41:19.182997  311011 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.3-0 -> /home/jenkins/minikube-integration/17223-136166/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 succeeded
	I0911 11:41:19.183011  311011 cache.go:87] Successfully saved all images to host disk.
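Because the v1.18.0 cri-o preload returned 404 (11:41:16.974), start falls back to caching the eight images individually; each cache.go:107 line above acquires a per-image lock before checking whether the tarball already exists. A compact Go sketch of that check-then-save pattern, with an in-process mutex map standing in for minikube's file-based locks (names and paths here are illustrative):

	package main

	import (
		"fmt"
		"os"
		"sync"
	)

	var (
		mu    sync.Mutex
		locks = map[string]*sync.Mutex{} // one lock per cache path, as in cache.go:107
	)

	func lockFor(path string) *sync.Mutex {
		mu.Lock()
		defer mu.Unlock()
		if locks[path] == nil {
			locks[path] = &sync.Mutex{}
		}
		return locks[path]
	}

	// ensureCached saves an image tarball unless it already exists, mirroring
	// the "exists ... skipping" vs "save to tar file ... succeeded" lines above.
	func ensureCached(image, path string, save func(string, string) error) error {
		l := lockFor(path)
		l.Lock()
		defer l.Unlock()
		if _, err := os.Stat(path); err == nil {
			fmt.Printf("%s exists, skipping\n", path)
			return nil
		}
		return save(image, path)
	}

	func main() {
		_ = ensureCached("registry.k8s.io/pause:3.2", "/tmp/pause_3.2",
			func(img, p string) error {
				fmt.Printf("saving %s -> %s\n", img, p)
				return os.WriteFile(p, []byte(img), 0o644) // placeholder for a real image save
			})
	}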
	I0911 11:41:20.450390  311011 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-822606
	
	I0911 11:41:20.450471  311011 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-822606
	I0911 11:41:20.467646  311011 main.go:141] libmachine: Using SSH client type: native
	I0911 11:41:20.468045  311011 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 127.0.0.1 33089 <nil> <nil>}
	I0911 11:41:20.468063  311011 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-822606' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-822606/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-822606' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0911 11:41:20.578081  311011 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0911 11:41:20.578146  311011 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17223-136166/.minikube CaCertPath:/home/jenkins/minikube-integration/17223-136166/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17223-136166/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17223-136166/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17223-136166/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17223-136166/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17223-136166/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17223-136166/.minikube}
	I0911 11:41:20.578181  311011 ubuntu.go:177] setting up certificates
	I0911 11:41:20.578197  311011 provision.go:83] configureAuth start
	I0911 11:41:20.578262  311011 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-822606
	I0911 11:41:20.594463  311011 provision.go:138] copyHostCerts
	I0911 11:41:20.594530  311011 exec_runner.go:144] found /home/jenkins/minikube-integration/17223-136166/.minikube/ca.pem, removing ...
	I0911 11:41:20.594555  311011 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17223-136166/.minikube/ca.pem
	I0911 11:41:20.594628  311011 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17223-136166/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17223-136166/.minikube/ca.pem (1082 bytes)
	I0911 11:41:20.594753  311011 exec_runner.go:144] found /home/jenkins/minikube-integration/17223-136166/.minikube/cert.pem, removing ...
	I0911 11:41:20.594767  311011 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17223-136166/.minikube/cert.pem
	I0911 11:41:20.594806  311011 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17223-136166/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17223-136166/.minikube/cert.pem (1123 bytes)
	I0911 11:41:20.594878  311011 exec_runner.go:144] found /home/jenkins/minikube-integration/17223-136166/.minikube/key.pem, removing ...
	I0911 11:41:20.594891  311011 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17223-136166/.minikube/key.pem
	I0911 11:41:20.594922  311011 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17223-136166/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17223-136166/.minikube/key.pem (1679 bytes)
	I0911 11:41:20.594986  311011 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17223-136166/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17223-136166/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17223-136166/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-822606 san=[172.17.0.2 127.0.0.1 localhost 127.0.0.1 minikube stopped-upgrade-822606]
	I0911 11:41:21.005225  311011 provision.go:172] copyRemoteCerts
	I0911 11:41:21.005282  311011 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0911 11:41:21.005316  311011 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-822606
	I0911 11:41:21.023313  311011 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33089 SSHKeyPath:/home/jenkins/minikube-integration/17223-136166/.minikube/machines/stopped-upgrade-822606/id_rsa Username:docker}
	I0911 11:41:21.105238  311011 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0911 11:41:21.121821  311011 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0911 11:41:21.138317  311011 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0911 11:41:21.154457  311011 provision.go:86] duration metric: configureAuth took 576.242825ms
	I0911 11:41:21.154488  311011 ubuntu.go:193] setting minikube options for container-runtime
	I0911 11:41:21.154662  311011 config.go:182] Loaded profile config "stopped-upgrade-822606": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I0911 11:41:21.154751  311011 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-822606
	I0911 11:41:21.171830  311011 main.go:141] libmachine: Using SSH client type: native
	I0911 11:41:21.172228  311011 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 127.0.0.1 33089 <nil> <nil>}
	I0911 11:41:21.172246  311011 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0911 11:41:21.872624  311011 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0911 11:41:21.872657  311011 machine.go:91] provisioned docker machine in 4.559934359s
	I0911 11:41:21.872669  311011 start.go:300] post-start starting for "stopped-upgrade-822606" (driver="docker")
	I0911 11:41:21.872683  311011 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0911 11:41:21.872756  311011 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0911 11:41:21.872806  311011 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-822606
	I0911 11:41:21.890302  311011 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33089 SSHKeyPath:/home/jenkins/minikube-integration/17223-136166/.minikube/machines/stopped-upgrade-822606/id_rsa Username:docker}
	I0911 11:41:21.969686  311011 ssh_runner.go:195] Run: cat /etc/os-release
	I0911 11:41:21.972746  311011 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0911 11:41:21.972780  311011 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0911 11:41:21.972794  311011 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0911 11:41:21.972801  311011 info.go:137] Remote host: Ubuntu 19.10
	I0911 11:41:21.972810  311011 filesync.go:126] Scanning /home/jenkins/minikube-integration/17223-136166/.minikube/addons for local assets ...
	I0911 11:41:21.972873  311011 filesync.go:126] Scanning /home/jenkins/minikube-integration/17223-136166/.minikube/files for local assets ...
	I0911 11:41:21.972950  311011 filesync.go:149] local asset: /home/jenkins/minikube-integration/17223-136166/.minikube/files/etc/ssl/certs/1434172.pem -> 1434172.pem in /etc/ssl/certs
	I0911 11:41:21.973061  311011 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0911 11:41:21.980125  311011 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/files/etc/ssl/certs/1434172.pem --> /etc/ssl/certs/1434172.pem (1708 bytes)
	I0911 11:41:21.997281  311011 start.go:303] post-start completed in 124.592659ms
	I0911 11:41:21.997365  311011 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0911 11:41:21.997404  311011 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-822606
	I0911 11:41:22.014683  311011 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33089 SSHKeyPath:/home/jenkins/minikube-integration/17223-136166/.minikube/machines/stopped-upgrade-822606/id_rsa Username:docker}
	I0911 11:41:22.090660  311011 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0911 11:41:22.094458  311011 fix.go:56] fixHost completed within 5.119456557s
	I0911 11:41:22.094485  311011 start.go:83] releasing machines lock for "stopped-upgrade-822606", held for 5.119495754s
	I0911 11:41:22.094553  311011 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-822606
	I0911 11:41:22.110810  311011 ssh_runner.go:195] Run: cat /version.json
	I0911 11:41:22.110866  311011 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-822606
	I0911 11:41:22.110905  311011 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0911 11:41:22.110971  311011 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-822606
	I0911 11:41:22.130658  311011 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33089 SSHKeyPath:/home/jenkins/minikube-integration/17223-136166/.minikube/machines/stopped-upgrade-822606/id_rsa Username:docker}
	I0911 11:41:22.131228  311011 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33089 SSHKeyPath:/home/jenkins/minikube-integration/17223-136166/.minikube/machines/stopped-upgrade-822606/id_rsa Username:docker}
	W0911 11:41:22.205494  311011 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0911 11:41:22.205569  311011 ssh_runner.go:195] Run: systemctl --version
	I0911 11:41:22.236970  311011 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0911 11:41:22.285417  311011 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0911 11:41:22.289779  311011 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0911 11:41:22.304752  311011 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0911 11:41:22.304850  311011 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0911 11:41:22.327873  311011 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0911 11:41:22.327897  311011 start.go:466] detecting cgroup driver to use...
	I0911 11:41:22.327930  311011 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0911 11:41:22.327987  311011 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0911 11:41:22.347587  311011 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0911 11:41:22.356265  311011 docker.go:196] disabling cri-docker service (if available) ...
	I0911 11:41:22.356315  311011 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0911 11:41:22.364779  311011 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0911 11:41:22.373145  311011 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W0911 11:41:22.381874  311011 docker.go:206] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I0911 11:41:22.381930  311011 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0911 11:41:22.445823  311011 docker.go:212] disabling docker service ...
	I0911 11:41:22.445889  311011 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0911 11:41:22.455715  311011 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0911 11:41:22.464429  311011 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0911 11:41:22.524689  311011 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0911 11:41:22.588688  311011 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0911 11:41:22.597969  311011 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0911 11:41:22.610372  311011 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0911 11:41:22.610447  311011 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 11:41:22.620936  311011 out.go:177] 
	W0911 11:41:22.622819  311011 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W0911 11:41:22.622840  311011 out.go:239] * 
	W0911 11:41:22.623680  311011 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0911 11:41:22.627368  311011 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:213: upgrade from v1.9.0 to HEAD failed: out/minikube-linux-amd64 start -p stopped-upgrade-822606 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (95.07s)
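The failure above comes down to the v1.9.0-era guest image (Ubuntu 19.10) not shipping /etc/crio/crio.conf.d/02-crio.conf, so the in-place sed that rewrites pause_image exits with status 2. A defensive variant of that step (a hypothetical sketch, not minikube's actual fix) would seed the drop-in before editing it:

	# Hypothetical guard: create the drop-in if the old base image lacks it,
	# then apply the same pause_image rewrite that failed above
	sudo mkdir -p /etc/crio/crio.conf.d
	[ -f /etc/crio/crio.conf.d/02-crio.conf ] \
	  || echo 'pause_image = ""' | sudo tee /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf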

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (47.21s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-844693 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-844693 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (42.636220361s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-844693] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17223
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17223-136166/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17223-136166/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting control plane node pause-844693 in cluster pause-844693
	* Pulling base image ...
	* Updating the running docker "pause-844693" container ...
	* Preparing Kubernetes v1.28.1 on CRI-O 1.24.6 ...
	* Enabled addons: 
	* Verifying Kubernetes components...
	* Done! kubectl is now configured to use "pause-844693" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I0911 11:43:04.042799  333486 out.go:296] Setting OutFile to fd 1 ...
	I0911 11:43:04.042943  333486 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 11:43:04.042952  333486 out.go:309] Setting ErrFile to fd 2...
	I0911 11:43:04.042957  333486 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 11:43:04.043166  333486 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17223-136166/.minikube/bin
	I0911 11:43:04.043735  333486 out.go:303] Setting JSON to false
	I0911 11:43:04.045353  333486 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":5132,"bootTime":1694427452,"procs":842,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0911 11:43:04.045427  333486 start.go:138] virtualization: kvm guest
	I0911 11:43:04.090864  333486 out.go:177] * [pause-844693] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0911 11:43:04.167289  333486 notify.go:220] Checking for updates...
	I0911 11:43:04.232968  333486 out.go:177]   - MINIKUBE_LOCATION=17223
	I0911 11:43:04.303428  333486 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0911 11:43:04.402314  333486 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17223-136166/kubeconfig
	I0911 11:43:04.434527  333486 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17223-136166/.minikube
	I0911 11:43:04.498400  333486 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0911 11:43:04.560250  333486 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0911 11:43:04.562773  333486 config.go:182] Loaded profile config "pause-844693": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0911 11:43:04.563359  333486 driver.go:373] Setting default libvirt URI to qemu:///system
	I0911 11:43:04.587033  333486 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I0911 11:43:04.587145  333486 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0911 11:43:04.683210  333486 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:74 SystemTime:2023-09-11 11:43:04.673064676 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1041-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0911 11:43:04.683352  333486 docker.go:294] overlay module found
	I0911 11:43:04.685735  333486 out.go:177] * Using the docker driver based on existing profile
	I0911 11:43:04.687310  333486 start.go:298] selected driver: docker
	I0911 11:43:04.687331  333486 start.go:902] validating driver "docker" against &{Name:pause-844693 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:pause-844693 Namespace:default APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-c
reds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 11:43:04.687471  333486 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0911 11:43:04.687538  333486 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0911 11:43:04.777334  333486 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:2 ContainersPaused:0 ContainersStopped:2 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:49 OomKillDisable:true NGoroutines:80 SystemTime:2023-09-11 11:43:04.765950538 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1041-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0911 11:43:04.778260  333486 cni.go:84] Creating CNI manager for ""
	I0911 11:43:04.778281  333486 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0911 11:43:04.778296  333486 start_flags.go:321] config:
	{Name:pause-844693 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:pause-844693 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISo
cket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesna
pshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 11:43:04.782003  333486 out.go:177] * Starting control plane node pause-844693 in cluster pause-844693
	I0911 11:43:04.783593  333486 cache.go:122] Beginning downloading kic base image for docker with crio
	I0911 11:43:04.785125  333486 out.go:177] * Pulling base image ...
	I0911 11:43:04.786957  333486 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0911 11:43:04.787017  333486 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17223-136166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4
	I0911 11:43:04.787035  333486 cache.go:57] Caching tarball of preloaded images
	I0911 11:43:04.787103  333486 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b in local docker daemon
	I0911 11:43:04.787134  333486 preload.go:174] Found /home/jenkins/minikube-integration/17223-136166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0911 11:43:04.787145  333486 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on crio
	I0911 11:43:04.787352  333486 profile.go:148] Saving config to /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/pause-844693/config.json ...
	I0911 11:43:04.808052  333486 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b in local docker daemon, skipping pull
	I0911 11:43:04.808074  333486 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b exists in daemon, skipping load
	I0911 11:43:04.808088  333486 cache.go:195] Successfully downloaded all kic artifacts
	I0911 11:43:04.808120  333486 start.go:365] acquiring machines lock for pause-844693: {Name:mk61e59c2f16fc85e6756af64b9f30077c437f1d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 11:43:04.808179  333486 start.go:369] acquired machines lock for "pause-844693" in 41.449µs
	I0911 11:43:04.808195  333486 start.go:96] Skipping create...Using existing machine configuration
	I0911 11:43:04.808200  333486 fix.go:54] fixHost starting: 
	I0911 11:43:04.808411  333486 cli_runner.go:164] Run: docker container inspect pause-844693 --format={{.State.Status}}
	I0911 11:43:04.829433  333486 fix.go:102] recreateIfNeeded on pause-844693: state=Running err=<nil>
	W0911 11:43:04.829467  333486 fix.go:128] unexpected machine state, will restart: <nil>
	I0911 11:43:04.832863  333486 out.go:177] * Updating the running docker "pause-844693" container ...
	I0911 11:43:04.834633  333486 machine.go:88] provisioning docker machine ...
	I0911 11:43:04.834660  333486 ubuntu.go:169] provisioning hostname "pause-844693"
	I0911 11:43:04.834739  333486 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-844693
	I0911 11:43:04.855490  333486 main.go:141] libmachine: Using SSH client type: native
	I0911 11:43:04.855948  333486 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 127.0.0.1 33107 <nil> <nil>}
	I0911 11:43:04.855960  333486 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-844693 && echo "pause-844693" | sudo tee /etc/hostname
	I0911 11:43:05.046950  333486 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-844693
	
	I0911 11:43:05.047034  333486 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-844693
	I0911 11:43:05.076281  333486 main.go:141] libmachine: Using SSH client type: native
	I0911 11:43:05.076960  333486 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 127.0.0.1 33107 <nil> <nil>}
	I0911 11:43:05.076989  333486 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-844693' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-844693/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-844693' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0911 11:43:05.230841  333486 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0911 11:43:05.230869  333486 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17223-136166/.minikube CaCertPath:/home/jenkins/minikube-integration/17223-136166/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17223-136166/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17223-136166/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17223-136166/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17223-136166/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17223-136166/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17223-136166/.minikube}
	I0911 11:43:05.230888  333486 ubuntu.go:177] setting up certificates
	I0911 11:43:05.230898  333486 provision.go:83] configureAuth start
	I0911 11:43:05.230963  333486 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-844693
	I0911 11:43:05.256111  333486 provision.go:138] copyHostCerts
	I0911 11:43:05.256165  333486 exec_runner.go:144] found /home/jenkins/minikube-integration/17223-136166/.minikube/ca.pem, removing ...
	I0911 11:43:05.256172  333486 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17223-136166/.minikube/ca.pem
	I0911 11:43:05.256235  333486 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17223-136166/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17223-136166/.minikube/ca.pem (1082 bytes)
	I0911 11:43:05.256331  333486 exec_runner.go:144] found /home/jenkins/minikube-integration/17223-136166/.minikube/cert.pem, removing ...
	I0911 11:43:05.256338  333486 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17223-136166/.minikube/cert.pem
	I0911 11:43:05.256361  333486 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17223-136166/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17223-136166/.minikube/cert.pem (1123 bytes)
	I0911 11:43:05.256410  333486 exec_runner.go:144] found /home/jenkins/minikube-integration/17223-136166/.minikube/key.pem, removing ...
	I0911 11:43:05.256414  333486 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17223-136166/.minikube/key.pem
	I0911 11:43:05.256433  333486 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17223-136166/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17223-136166/.minikube/key.pem (1679 bytes)
	I0911 11:43:05.256475  333486 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17223-136166/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17223-136166/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17223-136166/.minikube/certs/ca-key.pem org=jenkins.pause-844693 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube pause-844693]
	I0911 11:43:05.606200  333486 provision.go:172] copyRemoteCerts
	I0911 11:43:05.606281  333486 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0911 11:43:05.606333  333486 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-844693
	I0911 11:43:05.624128  333486 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33107 SSHKeyPath:/home/jenkins/minikube-integration/17223-136166/.minikube/machines/pause-844693/id_rsa Username:docker}
	I0911 11:43:05.721139  333486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0911 11:43:05.743381  333486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0911 11:43:05.805366  333486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0911 11:43:05.829471  333486 provision.go:86] duration metric: configureAuth took 598.55837ms
	I0911 11:43:05.829497  333486 ubuntu.go:193] setting minikube options for container-runtime
	I0911 11:43:05.829731  333486 config.go:182] Loaded profile config "pause-844693": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0911 11:43:05.829841  333486 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-844693
	I0911 11:43:05.847201  333486 main.go:141] libmachine: Using SSH client type: native
	I0911 11:43:05.847619  333486 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 127.0.0.1 33107 <nil> <nil>}
	I0911 11:43:05.847639  333486 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0911 11:43:11.285987  333486 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0911 11:43:11.286017  333486 machine.go:91] provisioned docker machine in 6.451367854s
	I0911 11:43:11.286030  333486 start.go:300] post-start starting for "pause-844693" (driver="docker")
	I0911 11:43:11.286042  333486 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0911 11:43:11.286132  333486 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0911 11:43:11.286182  333486 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-844693
	I0911 11:43:11.307050  333486 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33107 SSHKeyPath:/home/jenkins/minikube-integration/17223-136166/.minikube/machines/pause-844693/id_rsa Username:docker}
	I0911 11:43:11.405300  333486 ssh_runner.go:195] Run: cat /etc/os-release
	I0911 11:43:11.408871  333486 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0911 11:43:11.408907  333486 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0911 11:43:11.408920  333486 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0911 11:43:11.408928  333486 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0911 11:43:11.408941  333486 filesync.go:126] Scanning /home/jenkins/minikube-integration/17223-136166/.minikube/addons for local assets ...
	I0911 11:43:11.409004  333486 filesync.go:126] Scanning /home/jenkins/minikube-integration/17223-136166/.minikube/files for local assets ...
	I0911 11:43:11.409093  333486 filesync.go:149] local asset: /home/jenkins/minikube-integration/17223-136166/.minikube/files/etc/ssl/certs/1434172.pem -> 1434172.pem in /etc/ssl/certs
	I0911 11:43:11.409200  333486 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0911 11:43:11.420179  333486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/files/etc/ssl/certs/1434172.pem --> /etc/ssl/certs/1434172.pem (1708 bytes)
	I0911 11:43:11.446924  333486 start.go:303] post-start completed in 160.874894ms
	I0911 11:43:11.446998  333486 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0911 11:43:11.447044  333486 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-844693
	I0911 11:43:11.468260  333486 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33107 SSHKeyPath:/home/jenkins/minikube-integration/17223-136166/.minikube/machines/pause-844693/id_rsa Username:docker}
	I0911 11:43:11.582593  333486 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0911 11:43:11.589173  333486 fix.go:56] fixHost completed within 6.780964082s
	I0911 11:43:11.589199  333486 start.go:83] releasing machines lock for "pause-844693", held for 6.781009426s
	I0911 11:43:11.589270  333486 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-844693
	I0911 11:43:11.613924  333486 ssh_runner.go:195] Run: cat /version.json
	I0911 11:43:11.613979  333486 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-844693
	I0911 11:43:11.613990  333486 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0911 11:43:11.614042  333486 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-844693
	I0911 11:43:11.636822  333486 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33107 SSHKeyPath:/home/jenkins/minikube-integration/17223-136166/.minikube/machines/pause-844693/id_rsa Username:docker}
	I0911 11:43:11.639682  333486 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33107 SSHKeyPath:/home/jenkins/minikube-integration/17223-136166/.minikube/machines/pause-844693/id_rsa Username:docker}
	I0911 11:43:12.162675  333486 ssh_runner.go:195] Run: systemctl --version
	I0911 11:43:12.168474  333486 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0911 11:43:12.464323  333486 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0911 11:43:12.472149  333486 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0911 11:43:12.483211  333486 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0911 11:43:12.483296  333486 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0911 11:43:12.495300  333486 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0911 11:43:12.495326  333486 start.go:466] detecting cgroup driver to use...
	I0911 11:43:12.495359  333486 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0911 11:43:12.495407  333486 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0911 11:43:12.571831  333486 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0911 11:43:12.587063  333486 docker.go:196] disabling cri-docker service (if available) ...
	I0911 11:43:12.587110  333486 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0911 11:43:12.607400  333486 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0911 11:43:12.669671  333486 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0911 11:43:12.980146  333486 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0911 11:43:13.261608  333486 docker.go:212] disabling docker service ...
	I0911 11:43:13.261672  333486 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0911 11:43:13.277615  333486 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0911 11:43:13.292503  333486 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0911 11:43:13.664743  333486 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0911 11:43:13.894906  333486 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0911 11:43:13.909934  333486 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0911 11:43:13.972738  333486 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0911 11:43:13.972803  333486 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 11:43:13.987443  333486 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0911 11:43:13.987507  333486 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 11:43:13.999727  333486 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 11:43:14.011903  333486 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
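Taken together, the three sed edits above leave the 02-crio.conf drop-in containing lines of roughly this shape (a sketch; surrounding TOML sections and any other keys in the file are omitted):

	# /etc/crio/crio.conf.d/02-crio.conf after the edits (sketch)
	pause_image = "registry.k8s.io/pause:3.9"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"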
	I0911 11:43:14.058801  333486 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0911 11:43:14.068855  333486 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0911 11:43:14.080417  333486 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0911 11:43:14.092809  333486 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0911 11:43:14.303657  333486 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0911 11:43:22.396779  333486 ssh_runner.go:235] Completed: sudo systemctl restart crio: (8.093069315s)
	I0911 11:43:22.396818  333486 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0911 11:43:22.396886  333486 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0911 11:43:22.400563  333486 start.go:534] Will wait 60s for crictl version
	I0911 11:43:22.400646  333486 ssh_runner.go:195] Run: which crictl
	I0911 11:43:22.404691  333486 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0911 11:43:22.457728  333486 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0911 11:43:22.457810  333486 ssh_runner.go:195] Run: crio --version
	I0911 11:43:22.503118  333486 ssh_runner.go:195] Run: crio --version
	I0911 11:43:22.546371  333486 out.go:177] * Preparing Kubernetes v1.28.1 on CRI-O 1.24.6 ...
	I0911 11:43:22.548178  333486 cli_runner.go:164] Run: docker network inspect pause-844693 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0911 11:43:22.567958  333486 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0911 11:43:22.572012  333486 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0911 11:43:22.572084  333486 ssh_runner.go:195] Run: sudo crictl images --output json
	I0911 11:43:22.620455  333486 crio.go:496] all images are preloaded for cri-o runtime.
	I0911 11:43:22.620481  333486 crio.go:415] Images already preloaded, skipping extraction
	I0911 11:43:22.620536  333486 ssh_runner.go:195] Run: sudo crictl images --output json
	I0911 11:43:22.660449  333486 crio.go:496] all images are preloaded for cri-o runtime.
	I0911 11:43:22.660474  333486 cache_images.go:84] Images are preloaded, skipping loading
	I0911 11:43:22.660545  333486 ssh_runner.go:195] Run: crio config
	I0911 11:43:22.730066  333486 cni.go:84] Creating CNI manager for ""
	I0911 11:43:22.730098  333486 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0911 11:43:22.730121  333486 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0911 11:43:22.730144  333486 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-844693 NodeName:pause-844693 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0911 11:43:22.730297  333486 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-844693"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0911 11:43:22.730362  333486 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=pause-844693 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:pause-844693 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0911 11:43:22.730410  333486 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0911 11:43:22.739348  333486 binaries.go:44] Found k8s binaries, skipping transfer
	I0911 11:43:22.739429  333486 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0911 11:43:22.747871  333486 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (422 bytes)
	I0911 11:43:22.764503  333486 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0911 11:43:22.782985  333486 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2093 bytes)
	I0911 11:43:22.805153  333486 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0911 11:43:22.808703  333486 certs.go:56] Setting up /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/pause-844693 for IP: 192.168.76.2
	I0911 11:43:22.808734  333486 certs.go:190] acquiring lock for shared ca certs: {Name:mk582952512be9164e5fce9dc802f18bafa97346 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 11:43:22.808896  333486 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17223-136166/.minikube/ca.key
	I0911 11:43:22.808951  333486 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17223-136166/.minikube/proxy-client-ca.key
	I0911 11:43:22.809052  333486 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/pause-844693/client.key
	I0911 11:43:22.809142  333486 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/pause-844693/apiserver.key.31bdca25
	I0911 11:43:22.809227  333486 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/pause-844693/proxy-client.key
	I0911 11:43:22.809368  333486 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-136166/.minikube/certs/home/jenkins/minikube-integration/17223-136166/.minikube/certs/143417.pem (1338 bytes)
	W0911 11:43:22.809404  333486 certs.go:433] ignoring /home/jenkins/minikube-integration/17223-136166/.minikube/certs/home/jenkins/minikube-integration/17223-136166/.minikube/certs/143417_empty.pem, impossibly tiny 0 bytes
	I0911 11:43:22.809431  333486 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-136166/.minikube/certs/home/jenkins/minikube-integration/17223-136166/.minikube/certs/ca-key.pem (1679 bytes)
	I0911 11:43:22.809466  333486 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-136166/.minikube/certs/home/jenkins/minikube-integration/17223-136166/.minikube/certs/ca.pem (1082 bytes)
	I0911 11:43:22.809502  333486 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-136166/.minikube/certs/home/jenkins/minikube-integration/17223-136166/.minikube/certs/cert.pem (1123 bytes)
	I0911 11:43:22.809536  333486 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-136166/.minikube/certs/home/jenkins/minikube-integration/17223-136166/.minikube/certs/key.pem (1679 bytes)
	I0911 11:43:22.809715  333486 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-136166/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17223-136166/.minikube/files/etc/ssl/certs/1434172.pem (1708 bytes)
	I0911 11:43:22.810561  333486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/pause-844693/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0911 11:43:22.842061  333486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/pause-844693/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0911 11:43:22.870043  333486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/pause-844693/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0911 11:43:22.904154  333486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/pause-844693/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0911 11:43:22.934359  333486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0911 11:43:22.959538  333486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0911 11:43:22.991252  333486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0911 11:43:23.026499  333486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0911 11:43:23.051888  333486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0911 11:43:23.095945  333486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/certs/143417.pem --> /usr/share/ca-certificates/143417.pem (1338 bytes)
	I0911 11:43:23.121500  333486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/files/etc/ssl/certs/1434172.pem --> /usr/share/ca-certificates/1434172.pem (1708 bytes)
	I0911 11:43:23.145698  333486 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0911 11:43:23.171429  333486 ssh_runner.go:195] Run: openssl version
	I0911 11:43:23.178842  333486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0911 11:43:23.194020  333486 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0911 11:43:23.197914  333486 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 11 11:09 /usr/share/ca-certificates/minikubeCA.pem
	I0911 11:43:23.197971  333486 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0911 11:43:23.204630  333486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0911 11:43:23.214542  333486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/143417.pem && ln -fs /usr/share/ca-certificates/143417.pem /etc/ssl/certs/143417.pem"
	I0911 11:43:23.223895  333486 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/143417.pem
	I0911 11:43:23.227164  333486 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 11 11:15 /usr/share/ca-certificates/143417.pem
	I0911 11:43:23.227244  333486 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/143417.pem
	I0911 11:43:23.234043  333486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/143417.pem /etc/ssl/certs/51391683.0"
	I0911 11:43:23.243089  333486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1434172.pem && ln -fs /usr/share/ca-certificates/1434172.pem /etc/ssl/certs/1434172.pem"
	I0911 11:43:23.254388  333486 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1434172.pem
	I0911 11:43:23.258040  333486 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 11 11:15 /usr/share/ca-certificates/1434172.pem
	I0911 11:43:23.258167  333486 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1434172.pem
	I0911 11:43:23.268875  333486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1434172.pem /etc/ssl/certs/3ec20f2e.0"
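The openssl x509 -hash calls above compute the subject-name hash OpenSSL uses to look up CA certificates, which is why each cert is then linked as <hash>.0 under /etc/ssl/certs (e.g. b5213941.0 for minikubeCA.pem). A by-hand equivalent of one of those steps (a sketch):

	# Derive the subject hash and create the trust-store symlink OpenSSL expects
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"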
	I0911 11:43:23.279175  333486 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0911 11:43:23.283065  333486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0911 11:43:23.290568  333486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0911 11:43:23.298395  333486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0911 11:43:23.306718  333486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0911 11:43:23.314616  333486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0911 11:43:23.322227  333486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
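The -checkend 86400 probes above are standard expiry checks: openssl exits non-zero if the certificate expires within the given number of seconds, so each Run line verifies a cert is good for at least another 24 hours. For example:

	# Exit status 0 only if the cert is still valid 86400s (24h) from now
	openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 \
	  && echo "etcd server cert valid for at least 24h"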
	I0911 11:43:23.329995  333486 kubeadm.go:404] StartCluster: {Name:pause-844693 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:pause-844693 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 11:43:23.330192  333486 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0911 11:43:23.330276  333486 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0911 11:43:23.366608  333486 cri.go:89] found id: "cdf9aa78109f17bfdb382122a5728c8159ea39b39801dbd64eb80d2483cc2cab"
	I0911 11:43:23.366640  333486 cri.go:89] found id: "fdb91a124a6a570b2436748b4ba6a86b898e9d6a13a3930db525639b7ccf74fd"
	I0911 11:43:23.366647  333486 cri.go:89] found id: "aa9227286c98956417f65ee195d8cc9c096f779ac33dd93e51ec1f63e9c64727"
	I0911 11:43:23.366653  333486 cri.go:89] found id: "76d35a166fd5d8b00d62567d0e510be9f811d2a2733ee48dbe533273800db765"
	I0911 11:43:23.366658  333486 cri.go:89] found id: "9a62d90cca609fcd0f7c1dfecfc6253779227bfcd3f89c5bc37f5abfab2e993c"
	I0911 11:43:23.366665  333486 cri.go:89] found id: "0885e2fcf44f13ce18fb0b2e5369f657935199c74ef3bb6c3f7d944dd92c903f"
	I0911 11:43:23.366670  333486 cri.go:89] found id: "a441792974757ee7a4534b129dde0ff64172775499e60591ed65e0082d1c2680"
	I0911 11:43:23.366675  333486 cri.go:89] found id: "43b750852cf7cf1ba60fa8e429fff93606a5b2db68b62a2e96080df44d120808"
	I0911 11:43:23.366681  333486 cri.go:89] found id: "98d435edeb4433e8035865016ccf3816a70447275adc8b069cb74e222026044b"
	I0911 11:43:23.366708  333486 cri.go:89] found id: "385f7e6d1f77e5b71772a46ca4a4f24f678c2c4c31f7b142a7d3c41c599e0115"
	I0911 11:43:23.366721  333486 cri.go:89] found id: "abcad4a868fa9e3492e9b8da9cdb9c09be851280ca45cb057ad2790cfbe873f4"
	I0911 11:43:23.366727  333486 cri.go:89] found id: "b3946a720abf45cb0400edf2961b8177cee7ded0d89a67215949fba8eed0285f"
	I0911 11:43:23.366738  333486 cri.go:89] found id: "1de4fb6c7d34a7290d7a4ddb1c1dcc8c2f6b06fbd043dab5a2b4c9385bee8829"
	I0911 11:43:23.366744  333486 cri.go:89] found id: "a131faaa13e53100059367ccbeb807c8ca911aaee113f897c694d56b0847b530"
	I0911 11:43:23.366759  333486 cri.go:89] found id: "dbe08d5d45acc84a41457fc5fd2e252933fc14c88b84fb18bb6d48ae40109115"
	I0911 11:43:23.366764  333486 cri.go:89] found id: "dbd37dfbd8007b159842812dbf088fe24d51c704801c40d390145bd3ef1ee2b7"
	I0911 11:43:23.366773  333486 cri.go:89] found id: ""
	I0911 11:43:23.366819  333486 ssh_runner.go:195] Run: sudo runc list -f json

** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect pause-844693
helpers_test.go:235: (dbg) docker inspect pause-844693:

-- stdout --
	[
	    {
	        "Id": "19301acdf740a765cf8a948df53f89f0f5412ec12611a08738ed7ddc321e519f",
	        "Created": "2023-09-11T11:42:27.240188919Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 322838,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-09-11T11:42:27.599694483Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a5b1b95d50f24b5df6a9115c9ada0cb74f27ed4b03c4761eb60ee23f0bdd5210",
	        "ResolvConfPath": "/var/lib/docker/containers/19301acdf740a765cf8a948df53f89f0f5412ec12611a08738ed7ddc321e519f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/19301acdf740a765cf8a948df53f89f0f5412ec12611a08738ed7ddc321e519f/hostname",
	        "HostsPath": "/var/lib/docker/containers/19301acdf740a765cf8a948df53f89f0f5412ec12611a08738ed7ddc321e519f/hosts",
	        "LogPath": "/var/lib/docker/containers/19301acdf740a765cf8a948df53f89f0f5412ec12611a08738ed7ddc321e519f/19301acdf740a765cf8a948df53f89f0f5412ec12611a08738ed7ddc321e519f-json.log",
	        "Name": "/pause-844693",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "pause-844693:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-844693",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2147483648,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/c19b158e2211578bc0dd001705ce598d0dc4b2ac98547dea0ef6dc6f6b7f2054-init/diff:/var/lib/docker/overlay2/5fefd4c14d5bc4d7d67c2f6371e7160909b1f4d0d9a655e2a127286f8f0bbb5c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c19b158e2211578bc0dd001705ce598d0dc4b2ac98547dea0ef6dc6f6b7f2054/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c19b158e2211578bc0dd001705ce598d0dc4b2ac98547dea0ef6dc6f6b7f2054/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c19b158e2211578bc0dd001705ce598d0dc4b2ac98547dea0ef6dc6f6b7f2054/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "pause-844693",
	                "Source": "/var/lib/docker/volumes/pause-844693/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-844693",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-844693",
	                "name.minikube.sigs.k8s.io": "pause-844693",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b2e8989114acb2afcb6842c5918c1b59f132ddb21924fa1f0153a952d44500d0",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33107"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33106"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33103"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33105"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33104"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/b2e8989114ac",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-844693": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "19301acdf740",
	                        "pause-844693"
	                    ],
	                    "NetworkID": "816421c11511d905aaf1996ddf2d307ce7959ea60956a7b767ca58a7b283d397",
	                    "EndpointID": "275b1aa06486b89b207c63ab44405821efdc62d48dec642592f40653ed38ee3b",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
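The port map in the inspect dump above can be read back field-by-field with docker's Go-template formatter; a minimal sketch, using only the container name and the 8443/tcp entry shown in the "Ports" section of this report:

	# print the host port that docker mapped to the container's 8443/tcp (the apiserver port)
	docker inspect --format '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' pause-844693
	# expected output for the dump above: 33104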
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-844693 -n pause-844693
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-844693 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-844693 logs -n 25: (1.450500265s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p NoKubernetes-341786                | NoKubernetes-341786       | jenkins | v1.31.2 | 11 Sep 23 11:39 UTC | 11 Sep 23 11:39 UTC |
	| delete  | -p force-systemd-flag-682524          | force-systemd-flag-682524 | jenkins | v1.31.2 | 11 Sep 23 11:39 UTC | 11 Sep 23 11:39 UTC |
	| start   | -p NoKubernetes-341786                | NoKubernetes-341786       | jenkins | v1.31.2 | 11 Sep 23 11:39 UTC | 11 Sep 23 11:39 UTC |
	|         | --driver=docker                       |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-341786 sudo           | NoKubernetes-341786       | jenkins | v1.31.2 | 11 Sep 23 11:39 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |         |                     |                     |
	|         | service kubelet                       |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-341786                | NoKubernetes-341786       | jenkins | v1.31.2 | 11 Sep 23 11:39 UTC | 11 Sep 23 11:39 UTC |
	| delete  | -p offline-crio-341798                | offline-crio-341798       | jenkins | v1.31.2 | 11 Sep 23 11:40 UTC | 11 Sep 23 11:40 UTC |
	| start   | -p kubernetes-upgrade-872265          | kubernetes-upgrade-872265 | jenkins | v1.31.2 | 11 Sep 23 11:40 UTC | 11 Sep 23 11:40 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker                  |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-872265          | kubernetes-upgrade-872265 | jenkins | v1.31.2 | 11 Sep 23 11:40 UTC | 11 Sep 23 11:41 UTC |
	| start   | -p kubernetes-upgrade-872265          | kubernetes-upgrade-872265 | jenkins | v1.31.2 | 11 Sep 23 11:41 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker                  |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p missing-upgrade-782427             | missing-upgrade-782427    | jenkins | v1.31.2 | 11 Sep 23 11:41 UTC | 11 Sep 23 11:42 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker                  |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p stopped-upgrade-822606             | stopped-upgrade-822606    | jenkins | v1.31.2 | 11 Sep 23 11:41 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker                  |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-822606             | stopped-upgrade-822606    | jenkins | v1.31.2 | 11 Sep 23 11:41 UTC | 11 Sep 23 11:41 UTC |
	| start   | -p cert-options-645915                | cert-options-645915       | jenkins | v1.31.2 | 11 Sep 23 11:41 UTC | 11 Sep 23 11:41 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=docker                       |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | cert-options-645915 ssh               | cert-options-645915       | jenkins | v1.31.2 | 11 Sep 23 11:41 UTC | 11 Sep 23 11:41 UTC |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-645915 -- sudo        | cert-options-645915       | jenkins | v1.31.2 | 11 Sep 23 11:41 UTC | 11 Sep 23 11:41 UTC |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-645915                | cert-options-645915       | jenkins | v1.31.2 | 11 Sep 23 11:41 UTC | 11 Sep 23 11:41 UTC |
	| delete  | -p missing-upgrade-782427             | missing-upgrade-782427    | jenkins | v1.31.2 | 11 Sep 23 11:42 UTC | 11 Sep 23 11:42 UTC |
	| start   | -p pause-844693 --memory=2048         | pause-844693              | jenkins | v1.31.2 | 11 Sep 23 11:42 UTC | 11 Sep 23 11:43 UTC |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=all --driver=docker            |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p cert-expiration-352590             | cert-expiration-352590    | jenkins | v1.31.2 | 11 Sep 23 11:42 UTC | 11 Sep 23 11:42 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=docker                       |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p running-upgrade-398660             | running-upgrade-398660    | jenkins | v1.31.2 | 11 Sep 23 11:42 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker                  |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-398660             | running-upgrade-398660    | jenkins | v1.31.2 | 11 Sep 23 11:42 UTC | 11 Sep 23 11:42 UTC |
	| delete  | -p cert-expiration-352590             | cert-expiration-352590    | jenkins | v1.31.2 | 11 Sep 23 11:42 UTC | 11 Sep 23 11:43 UTC |
	| start   | -p auto-917885 --memory=3072          | auto-917885               | jenkins | v1.31.2 | 11 Sep 23 11:42 UTC |                     |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                    |                           |         |         |                     |                     |
	|         | --driver=docker                       |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kindnet-917885                     | kindnet-917885            | jenkins | v1.31.2 | 11 Sep 23 11:43 UTC |                     |
	|         | --memory=3072                         |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                    |                           |         |         |                     |                     |
	|         | --cni=kindnet --driver=docker         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p pause-844693                       | pause-844693              | jenkins | v1.31.2 | 11 Sep 23 11:43 UTC | 11 Sep 23 11:43 UTC |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker                  |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/11 11:43:04
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.20.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0911 11:43:04.042799  333486 out.go:296] Setting OutFile to fd 1 ...
	I0911 11:43:04.042943  333486 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 11:43:04.042952  333486 out.go:309] Setting ErrFile to fd 2...
	I0911 11:43:04.042957  333486 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 11:43:04.043166  333486 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17223-136166/.minikube/bin
	I0911 11:43:04.043735  333486 out.go:303] Setting JSON to false
	I0911 11:43:04.045353  333486 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":5132,"bootTime":1694427452,"procs":842,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0911 11:43:04.045427  333486 start.go:138] virtualization: kvm guest
	I0911 11:43:04.090864  333486 out.go:177] * [pause-844693] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0911 11:43:04.167289  333486 notify.go:220] Checking for updates...
	I0911 11:43:04.232968  333486 out.go:177]   - MINIKUBE_LOCATION=17223
	I0911 11:43:04.303428  333486 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0911 11:43:04.402314  333486 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17223-136166/kubeconfig
	I0911 11:43:04.434527  333486 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17223-136166/.minikube
	I0911 11:43:04.498400  333486 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0911 11:43:04.560250  333486 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0911 11:42:59.890833  332029 out.go:204] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0911 11:42:59.891074  332029 start.go:159] libmachine.API.Create for "auto-917885" (driver="docker")
	I0911 11:42:59.891097  332029 client.go:168] LocalClient.Create starting
	I0911 11:42:59.891149  332029 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17223-136166/.minikube/certs/ca.pem
	I0911 11:42:59.891178  332029 main.go:141] libmachine: Decoding PEM data...
	I0911 11:42:59.891194  332029 main.go:141] libmachine: Parsing certificate...
	I0911 11:42:59.891251  332029 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17223-136166/.minikube/certs/cert.pem
	I0911 11:42:59.891269  332029 main.go:141] libmachine: Decoding PEM data...
	I0911 11:42:59.891277  332029 main.go:141] libmachine: Parsing certificate...
	I0911 11:42:59.891579  332029 cli_runner.go:164] Run: docker network inspect auto-917885 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0911 11:42:59.908806  332029 cli_runner.go:211] docker network inspect auto-917885 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0911 11:42:59.908882  332029 network_create.go:281] running [docker network inspect auto-917885] to gather additional debugging logs...
	I0911 11:42:59.908907  332029 cli_runner.go:164] Run: docker network inspect auto-917885
	W0911 11:42:59.927118  332029 cli_runner.go:211] docker network inspect auto-917885 returned with exit code 1
	I0911 11:42:59.927161  332029 network_create.go:284] error running [docker network inspect auto-917885]: docker network inspect auto-917885: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network auto-917885 not found
	I0911 11:42:59.927185  332029 network_create.go:286] output of [docker network inspect auto-917885]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network auto-917885 not found
	
	** /stderr **
	I0911 11:42:59.927239  332029 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0911 11:42:59.946001  332029 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-20e875ef8442 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:d7:c6:0a:5c} reservation:<nil>}
	I0911 11:42:59.946764  332029 network.go:214] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-40f62e59100c IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:ee:21:f8:bd} reservation:<nil>}
	I0911 11:42:59.947517  332029 network.go:214] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-a151a90a714a IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:1f:ed:6f:6b} reservation:<nil>}
	I0911 11:42:59.948366  332029 network.go:214] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-816421c11511 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:02:42:0b:f6:61:1a} reservation:<nil>}
	I0911 11:42:59.950079  332029 network.go:209] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001709dd0}
	I0911 11:42:59.950167  332029 network_create.go:123] attempt to create docker network auto-917885 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0911 11:42:59.950245  332029 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-917885 auto-917885
	I0911 11:43:00.009154  332029 network_create.go:107] docker network auto-917885 192.168.85.0/24 created
	I0911 11:43:00.009194  332029 kic.go:117] calculated static IP "192.168.85.2" for the "auto-917885" container
	I0911 11:43:00.009272  332029 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0911 11:43:00.028336  332029 cli_runner.go:164] Run: docker volume create auto-917885 --label name.minikube.sigs.k8s.io=auto-917885 --label created_by.minikube.sigs.k8s.io=true
	I0911 11:43:00.051685  332029 oci.go:103] Successfully created a docker volume auto-917885
	I0911 11:43:00.051779  332029 cli_runner.go:164] Run: docker run --rm --name auto-917885-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-917885 --entrypoint /usr/bin/test -v auto-917885:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b -d /var/lib
	I0911 11:43:00.894980  332029 oci.go:107] Successfully prepared a docker volume auto-917885
	I0911 11:43:00.895021  332029 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0911 11:43:00.895048  332029 kic.go:190] Starting extracting preloaded images to volume ...
	I0911 11:43:00.895132  332029 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17223-136166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v auto-917885:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b -I lz4 -xf /preloaded.tar -C /extractDir
	I0911 11:43:04.579049  332029 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17223-136166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v auto-917885:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b -I lz4 -xf /preloaded.tar -C /extractDir: (3.683860902s)
	I0911 11:43:04.579086  332029 kic.go:199] duration metric: took 3.684033 seconds to extract preloaded images to volume
	W0911 11:43:04.579270  332029 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0911 11:43:04.579408  332029 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0911 11:43:04.562773  333486 config.go:182] Loaded profile config "pause-844693": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0911 11:43:04.563359  333486 driver.go:373] Setting default libvirt URI to qemu:///system
	I0911 11:43:04.587033  333486 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I0911 11:43:04.587145  333486 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0911 11:43:04.683210  333486 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:74 SystemTime:2023-09-11 11:43:04.673064676 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1041-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0911 11:43:04.683352  333486 docker.go:294] overlay module found
	I0911 11:43:04.685735  333486 out.go:177] * Using the docker driver based on existing profile
	I0911 11:43:04.687310  333486 start.go:298] selected driver: docker
	I0911 11:43:04.687331  333486 start.go:902] validating driver "docker" against &{Name:pause-844693 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:pause-844693 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 11:43:04.687471  333486 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0911 11:43:04.687538  333486 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0911 11:43:04.777334  333486 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:2 ContainersPaused:0 ContainersStopped:2 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:49 OomKillDisable:true NGoroutines:80 SystemTime:2023-09-11 11:43:04.765950538 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1041-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0911 11:43:04.778260  333486 cni.go:84] Creating CNI manager for ""
	I0911 11:43:04.778281  333486 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0911 11:43:04.778296  333486 start_flags.go:321] config:
	{Name:pause-844693 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:pause-844693 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 11:43:04.782003  333486 out.go:177] * Starting control plane node pause-844693 in cluster pause-844693
	I0911 11:43:04.783593  333486 cache.go:122] Beginning downloading kic base image for docker with crio
	I0911 11:43:04.785125  333486 out.go:177] * Pulling base image ...
	I0911 11:43:04.786957  333486 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0911 11:43:04.787017  333486 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17223-136166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4
	I0911 11:43:04.787035  333486 cache.go:57] Caching tarball of preloaded images
	I0911 11:43:04.787103  333486 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b in local docker daemon
	I0911 11:43:04.787134  333486 preload.go:174] Found /home/jenkins/minikube-integration/17223-136166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0911 11:43:04.787145  333486 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on crio
	I0911 11:43:04.787352  333486 profile.go:148] Saving config to /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/pause-844693/config.json ...
	I0911 11:43:04.808052  333486 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b in local docker daemon, skipping pull
	I0911 11:43:04.808074  333486 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b exists in daemon, skipping load
	I0911 11:43:04.808088  333486 cache.go:195] Successfully downloaded all kic artifacts
	I0911 11:43:04.808120  333486 start.go:365] acquiring machines lock for pause-844693: {Name:mk61e59c2f16fc85e6756af64b9f30077c437f1d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 11:43:04.808179  333486 start.go:369] acquired machines lock for "pause-844693" in 41.449µs
	I0911 11:43:04.808195  333486 start.go:96] Skipping create...Using existing machine configuration
	I0911 11:43:04.808200  333486 fix.go:54] fixHost starting: 
	I0911 11:43:04.808411  333486 cli_runner.go:164] Run: docker container inspect pause-844693 --format={{.State.Status}}
	I0911 11:43:04.829433  333486 fix.go:102] recreateIfNeeded on pause-844693: state=Running err=<nil>
	W0911 11:43:04.829467  333486 fix.go:128] unexpected machine state, will restart: <nil>
	I0911 11:43:04.832863  333486 out.go:177] * Updating the running docker "pause-844693" container ...
	I0911 11:43:01.366401  332971 out.go:204] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0911 11:43:01.366627  332971 start.go:159] libmachine.API.Create for "kindnet-917885" (driver="docker")
	I0911 11:43:01.366653  332971 client.go:168] LocalClient.Create starting
	I0911 11:43:01.366711  332971 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17223-136166/.minikube/certs/ca.pem
	I0911 11:43:01.366742  332971 main.go:141] libmachine: Decoding PEM data...
	I0911 11:43:01.366756  332971 main.go:141] libmachine: Parsing certificate...
	I0911 11:43:01.366819  332971 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17223-136166/.minikube/certs/cert.pem
	I0911 11:43:01.366837  332971 main.go:141] libmachine: Decoding PEM data...
	I0911 11:43:01.366848  332971 main.go:141] libmachine: Parsing certificate...
	I0911 11:43:01.367146  332971 cli_runner.go:164] Run: docker network inspect kindnet-917885 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0911 11:43:01.386272  332971 cli_runner.go:211] docker network inspect kindnet-917885 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0911 11:43:01.386392  332971 network_create.go:281] running [docker network inspect kindnet-917885] to gather additional debugging logs...
	I0911 11:43:01.386423  332971 cli_runner.go:164] Run: docker network inspect kindnet-917885
	W0911 11:43:01.404358  332971 cli_runner.go:211] docker network inspect kindnet-917885 returned with exit code 1
	I0911 11:43:01.404396  332971 network_create.go:284] error running [docker network inspect kindnet-917885]: docker network inspect kindnet-917885: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network kindnet-917885 not found
	I0911 11:43:01.404427  332971 network_create.go:286] output of [docker network inspect kindnet-917885]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network kindnet-917885 not found
	
	** /stderr **
	I0911 11:43:01.404491  332971 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0911 11:43:01.424104  332971 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-20e875ef8442 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:d7:c6:0a:5c} reservation:<nil>}
	I0911 11:43:01.424980  332971 network.go:214] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-40f62e59100c IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:ee:21:f8:bd} reservation:<nil>}
	I0911 11:43:01.425749  332971 network.go:214] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-a151a90a714a IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:1f:ed:6f:6b} reservation:<nil>}
	I0911 11:43:01.426716  332971 network.go:214] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-816421c11511 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:02:42:0b:f6:61:1a} reservation:<nil>}
	I0911 11:43:01.427602  332971 network.go:214] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-32603fed1456 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:02:42:6c:c2:0d:6a} reservation:<nil>}
	I0911 11:43:01.428435  332971 network.go:209] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001563ab0}
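
The network.go lines above are minikube's free-subnet scan: candidate 192.168.x.0/24 blocks are probed with the third octet stepping by 9 (49, 58, 67, ...) until one is found that no existing docker bridge already claims. A minimal Go sketch of that loop, with the taken set hard-coded from this run (the step size and starting octet are inferred from the logged sequence, not confirmed from source):

package main

import "fmt"

func main() {
	// Subnets already backing a docker bridge in this run (from the log).
	taken := map[int]bool{49: true, 58: true, 67: true, 76: true, 85: true}
	for third := 49; third < 256; third += 9 {
		subnet := fmt.Sprintf("192.168.%d.0/24", third)
		if taken[third] {
			fmt.Println("skipping subnet", subnet, "that is taken")
			continue
		}
		fmt.Println("using free private subnet", subnet)
		return // 192.168.94.0/24 in this run
	}
}
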
	I0911 11:43:01.428467  332971 network_create.go:123] attempt to create docker network kindnet-917885 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I0911 11:43:01.428526  332971 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kindnet-917885 kindnet-917885
	I0911 11:43:01.500197  332971 network_create.go:107] docker network kindnet-917885 192.168.94.0/24 created
	I0911 11:43:01.500232  332971 kic.go:117] calculated static IP "192.168.94.2" for the "kindnet-917885" container
	I0911 11:43:01.500333  332971 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0911 11:43:01.517738  332971 cli_runner.go:164] Run: docker volume create kindnet-917885 --label name.minikube.sigs.k8s.io=kindnet-917885 --label created_by.minikube.sigs.k8s.io=true
	I0911 11:43:01.538051  332971 oci.go:103] Successfully created a docker volume kindnet-917885
	I0911 11:43:01.538199  332971 cli_runner.go:164] Run: docker run --rm --name kindnet-917885-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-917885 --entrypoint /usr/bin/test -v kindnet-917885:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b -d /var/lib
	I0911 11:43:04.578040  332971 cli_runner.go:217] Completed: docker run --rm --name kindnet-917885-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-917885 --entrypoint /usr/bin/test -v kindnet-917885:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b -d /var/lib: (3.03978727s)
	I0911 11:43:04.578075  332971 oci.go:107] Successfully prepared a docker volume kindnet-917885
	I0911 11:43:04.578122  332971 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0911 11:43:04.578148  332971 kic.go:190] Starting extracting preloaded images to volume ...
	I0911 11:43:04.578222  332971 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17223-136166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-917885:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b -I lz4 -xf /preloaded.tar -C /extractDir
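
The docker run just issued is the preload step: the lz4 tarball of cached images is bind-mounted read-only into a throwaway kicbase container and untarred into the named volume that later backs /var in the node container. A hedged Go sketch of the same call; only the host tarball path is shortened to a placeholder:

package main

import (
	"log"
	"os/exec"
)

func main() {
	// Mirror the `docker run --entrypoint /usr/bin/tar` call from the log.
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", "/path/to/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro",
		"-v", "kindnet-917885:/extractDir",
		"gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174",
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("preload extraction failed: %v\n%s", err, out)
	}
}
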
	I0911 11:43:04.686171  306855 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0911 11:43:04.686587  306855 api_server.go:269] stopped: https://192.168.67.2:8443/healthz: Get "https://192.168.67.2:8443/healthz": dial tcp 192.168.67.2:8443: connect: connection refused
	I0911 11:43:04.686632  306855 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0911 11:43:04.686678  306855 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0911 11:43:04.739051  306855 cri.go:89] found id: "b7f54e8d3985de06b7fea85e2d43d2c57b87e1c8541ffc22fbb1a69c05525596"
	I0911 11:43:04.739073  306855 cri.go:89] found id: ""
	I0911 11:43:04.739083  306855 logs.go:284] 1 containers: [b7f54e8d3985de06b7fea85e2d43d2c57b87e1c8541ffc22fbb1a69c05525596]
	I0911 11:43:04.739138  306855 ssh_runner.go:195] Run: which crictl
	I0911 11:43:04.744552  306855 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0911 11:43:04.744624  306855 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0911 11:43:04.793439  306855 cri.go:89] found id: ""
	I0911 11:43:04.793463  306855 logs.go:284] 0 containers: []
	W0911 11:43:04.793474  306855 logs.go:286] No container was found matching "etcd"
	I0911 11:43:04.793482  306855 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0911 11:43:04.793537  306855 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0911 11:43:04.833969  306855 cri.go:89] found id: ""
	I0911 11:43:04.833992  306855 logs.go:284] 0 containers: []
	W0911 11:43:04.833999  306855 logs.go:286] No container was found matching "coredns"
	I0911 11:43:04.834005  306855 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0911 11:43:04.834061  306855 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0911 11:43:04.879099  306855 cri.go:89] found id: ""
	I0911 11:43:04.879129  306855 logs.go:284] 0 containers: []
	W0911 11:43:04.879139  306855 logs.go:286] No container was found matching "kube-scheduler"
	I0911 11:43:04.879185  306855 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0911 11:43:04.879252  306855 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0911 11:43:04.921504  306855 cri.go:89] found id: ""
	I0911 11:43:04.921533  306855 logs.go:284] 0 containers: []
	W0911 11:43:04.921542  306855 logs.go:286] No container was found matching "kube-proxy"
	I0911 11:43:04.921550  306855 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0911 11:43:04.921616  306855 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0911 11:43:04.959890  306855 cri.go:89] found id: ""
	I0911 11:43:04.959920  306855 logs.go:284] 0 containers: []
	W0911 11:43:04.959931  306855 logs.go:286] No container was found matching "kube-controller-manager"
	I0911 11:43:04.959940  306855 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0911 11:43:04.959997  306855 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0911 11:43:05.005187  306855 cri.go:89] found id: ""
	I0911 11:43:05.005218  306855 logs.go:284] 0 containers: []
	W0911 11:43:05.005231  306855 logs.go:286] No container was found matching "kindnet"
	I0911 11:43:05.005239  306855 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0911 11:43:05.005313  306855 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0911 11:43:05.076523  306855 cri.go:89] found id: ""
	I0911 11:43:05.076544  306855 logs.go:284] 0 containers: []
	W0911 11:43:05.076553  306855 logs.go:286] No container was found matching "storage-provisioner"
	I0911 11:43:05.076576  306855 logs.go:123] Gathering logs for dmesg ...
	I0911 11:43:05.076643  306855 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0911 11:43:05.107963  306855 logs.go:123] Gathering logs for describe nodes ...
	I0911 11:43:05.108075  306855 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0911 11:43:05.194411  306855 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0911 11:43:05.194435  306855 logs.go:123] Gathering logs for kube-apiserver [b7f54e8d3985de06b7fea85e2d43d2c57b87e1c8541ffc22fbb1a69c05525596] ...
	I0911 11:43:05.194449  306855 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b7f54e8d3985de06b7fea85e2d43d2c57b87e1c8541ffc22fbb1a69c05525596"
	I0911 11:43:05.252239  306855 logs.go:123] Gathering logs for CRI-O ...
	I0911 11:43:05.252281  306855 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0911 11:43:05.285393  306855 logs.go:123] Gathering logs for container status ...
	I0911 11:43:05.285447  306855 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
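
The container-status gather above uses a defensive fallback chain: resolve crictl with which, fall back to the bare name if which finds nothing, and if crictl fails entirely list containers with docker instead. A small illustrative Go wrapper around that exact bash one-liner:

package main

import (
	"log"
	"os/exec"
)

func main() {
	// Same fallback chain as the log: crictl if available, else docker.
	out, err := exec.Command("/bin/bash", "-c",
		"sudo `which crictl || echo crictl` ps -a || sudo docker ps -a").CombinedOutput()
	if err != nil {
		log.Fatalf("%v\n%s", err, out)
	}
	log.Printf("%s", out)
}
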
	I0911 11:43:05.337247  306855 logs.go:123] Gathering logs for kubelet ...
	I0911 11:43:05.337276  306855 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0911 11:43:04.834633  333486 machine.go:88] provisioning docker machine ...
	I0911 11:43:04.834660  333486 ubuntu.go:169] provisioning hostname "pause-844693"
	I0911 11:43:04.834739  333486 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-844693
	I0911 11:43:04.855490  333486 main.go:141] libmachine: Using SSH client type: native
	I0911 11:43:04.855948  333486 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 127.0.0.1 33107 <nil> <nil>}
	I0911 11:43:04.855960  333486 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-844693 && echo "pause-844693" | sudo tee /etc/hostname
	I0911 11:43:05.046950  333486 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-844693
	
	I0911 11:43:05.047034  333486 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-844693
	I0911 11:43:05.076281  333486 main.go:141] libmachine: Using SSH client type: native
	I0911 11:43:05.076960  333486 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 127.0.0.1 33107 <nil> <nil>}
	I0911 11:43:05.076989  333486 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-844693' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-844693/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-844693' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0911 11:43:05.230841  333486 main.go:141] libmachine: SSH cmd err, output: <nil>: 
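
The "native" SSH client named in these lines is a Go-side dialer: minikube connects to the docker-published SSH port (33107 for pause-844693) on 127.0.0.1 as the docker user and runs the hostname and /etc/hosts commands shown above. A hedged sketch using golang.org/x/crypto/ssh; the key path is abbreviated, and skipping host-key checks is acceptable only in this disposable test environment:

package main

import (
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Per-machine key; path abbreviated from the log's .minikube tree.
	pem, err := os.ReadFile(os.ExpandEnv("$HOME/.minikube/machines/pause-844693/id_rsa"))
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(pem)
	if err != nil {
		log.Fatal(err)
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:33107", &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test environment only
	})
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()
	out, err := sess.CombinedOutput(`sudo hostname pause-844693 && echo "pause-844693" | sudo tee /etc/hostname`)
	if err != nil {
		log.Fatalf("%v\n%s", err, out)
	}
	log.Printf("%s", out)
}
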
	I0911 11:43:05.230869  333486 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17223-136166/.minikube CaCertPath:/home/jenkins/minikube-integration/17223-136166/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17223-136166/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17223-136166/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17223-136166/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17223-136166/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17223-136166/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17223-136166/.minikube}
	I0911 11:43:05.230888  333486 ubuntu.go:177] setting up certificates
	I0911 11:43:05.230898  333486 provision.go:83] configureAuth start
	I0911 11:43:05.230963  333486 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-844693
	I0911 11:43:05.256111  333486 provision.go:138] copyHostCerts
	I0911 11:43:05.256165  333486 exec_runner.go:144] found /home/jenkins/minikube-integration/17223-136166/.minikube/ca.pem, removing ...
	I0911 11:43:05.256172  333486 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17223-136166/.minikube/ca.pem
	I0911 11:43:05.256235  333486 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17223-136166/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17223-136166/.minikube/ca.pem (1082 bytes)
	I0911 11:43:05.256331  333486 exec_runner.go:144] found /home/jenkins/minikube-integration/17223-136166/.minikube/cert.pem, removing ...
	I0911 11:43:05.256338  333486 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17223-136166/.minikube/cert.pem
	I0911 11:43:05.256361  333486 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17223-136166/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17223-136166/.minikube/cert.pem (1123 bytes)
	I0911 11:43:05.256410  333486 exec_runner.go:144] found /home/jenkins/minikube-integration/17223-136166/.minikube/key.pem, removing ...
	I0911 11:43:05.256414  333486 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17223-136166/.minikube/key.pem
	I0911 11:43:05.256433  333486 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17223-136166/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17223-136166/.minikube/key.pem (1679 bytes)
	I0911 11:43:05.256475  333486 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17223-136166/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17223-136166/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17223-136166/.minikube/certs/ca-key.pem org=jenkins.pause-844693 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube pause-844693]
	I0911 11:43:05.606200  333486 provision.go:172] copyRemoteCerts
	I0911 11:43:05.606281  333486 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0911 11:43:05.606333  333486 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-844693
	I0911 11:43:05.624128  333486 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33107 SSHKeyPath:/home/jenkins/minikube-integration/17223-136166/.minikube/machines/pause-844693/id_rsa Username:docker}
	I0911 11:43:05.721139  333486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0911 11:43:05.743381  333486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0911 11:43:05.805366  333486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0911 11:43:05.829471  333486 provision.go:86] duration metric: configureAuth took 598.55837ms
	I0911 11:43:05.829497  333486 ubuntu.go:193] setting minikube options for container-runtime
	I0911 11:43:05.829731  333486 config.go:182] Loaded profile config "pause-844693": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0911 11:43:05.829841  333486 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-844693
	I0911 11:43:05.847201  333486 main.go:141] libmachine: Using SSH client type: native
	I0911 11:43:05.847619  333486 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 127.0.0.1 33107 <nil> <nil>}
	I0911 11:43:05.847639  333486 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0911 11:43:04.672319  332029 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-917885 --name auto-917885 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-917885 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-917885 --network auto-917885 --ip 192.168.85.2 --volume auto-917885:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b
	I0911 11:43:05.061375  332029 cli_runner.go:164] Run: docker container inspect auto-917885 --format={{.State.Running}}
	I0911 11:43:05.086925  332029 cli_runner.go:164] Run: docker container inspect auto-917885 --format={{.State.Status}}
	I0911 11:43:05.116366  332029 cli_runner.go:164] Run: docker exec auto-917885 stat /var/lib/dpkg/alternatives/iptables
	I0911 11:43:05.168142  332029 oci.go:144] the created container "auto-917885" has a running status.
	I0911 11:43:05.168178  332029 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/17223-136166/.minikube/machines/auto-917885/id_rsa...
	I0911 11:43:05.330664  332029 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17223-136166/.minikube/machines/auto-917885/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0911 11:43:05.356164  332029 cli_runner.go:164] Run: docker container inspect auto-917885 --format={{.State.Status}}
	I0911 11:43:05.380464  332029 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0911 11:43:05.380489  332029 kic_runner.go:114] Args: [docker exec --privileged auto-917885 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0911 11:43:05.463815  332029 cli_runner.go:164] Run: docker container inspect auto-917885 --format={{.State.Status}}
	I0911 11:43:05.485171  332029 machine.go:88] provisioning docker machine ...
	I0911 11:43:05.485217  332029 ubuntu.go:169] provisioning hostname "auto-917885"
	I0911 11:43:05.485285  332029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-917885
	I0911 11:43:05.510336  332029 main.go:141] libmachine: Using SSH client type: native
	I0911 11:43:05.511014  332029 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 127.0.0.1 33112 <nil> <nil>}
	I0911 11:43:05.511047  332029 main.go:141] libmachine: About to run SSH command:
	sudo hostname auto-917885 && echo "auto-917885" | sudo tee /etc/hostname
	I0911 11:43:05.511774  332029 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:36070->127.0.0.1:33112: read: connection reset by peer
	I0911 11:43:08.681008  332029 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-917885
	
	I0911 11:43:08.681110  332029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-917885
	I0911 11:43:08.701951  332029 main.go:141] libmachine: Using SSH client type: native
	I0911 11:43:08.702660  332029 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 127.0.0.1 33112 <nil> <nil>}
	I0911 11:43:08.702695  332029 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-917885' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-917885/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-917885' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0911 11:43:08.834321  332029 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0911 11:43:08.834351  332029 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17223-136166/.minikube CaCertPath:/home/jenkins/minikube-integration/17223-136166/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17223-136166/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17223-136166/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17223-136166/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17223-136166/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17223-136166/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17223-136166/.minikube}
	I0911 11:43:08.834394  332029 ubuntu.go:177] setting up certificates
	I0911 11:43:08.834407  332029 provision.go:83] configureAuth start
	I0911 11:43:08.834458  332029 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-917885
	I0911 11:43:08.853165  332029 provision.go:138] copyHostCerts
	I0911 11:43:08.853229  332029 exec_runner.go:144] found /home/jenkins/minikube-integration/17223-136166/.minikube/ca.pem, removing ...
	I0911 11:43:08.853238  332029 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17223-136166/.minikube/ca.pem
	I0911 11:43:08.853317  332029 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17223-136166/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17223-136166/.minikube/ca.pem (1082 bytes)
	I0911 11:43:08.853400  332029 exec_runner.go:144] found /home/jenkins/minikube-integration/17223-136166/.minikube/cert.pem, removing ...
	I0911 11:43:08.853404  332029 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17223-136166/.minikube/cert.pem
	I0911 11:43:08.853430  332029 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17223-136166/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17223-136166/.minikube/cert.pem (1123 bytes)
	I0911 11:43:08.853480  332029 exec_runner.go:144] found /home/jenkins/minikube-integration/17223-136166/.minikube/key.pem, removing ...
	I0911 11:43:08.853484  332029 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17223-136166/.minikube/key.pem
	I0911 11:43:08.853502  332029 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17223-136166/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17223-136166/.minikube/key.pem (1679 bytes)
	I0911 11:43:08.853542  332029 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17223-136166/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17223-136166/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17223-136166/.minikube/certs/ca-key.pem org=jenkins.auto-917885 san=[192.168.85.2 127.0.0.1 localhost 127.0.0.1 minikube auto-917885]
	I0911 11:43:09.205329  332029 provision.go:172] copyRemoteCerts
	I0911 11:43:09.205412  332029 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0911 11:43:09.205460  332029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-917885
	I0911 11:43:09.223875  332029 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33112 SSHKeyPath:/home/jenkins/minikube-integration/17223-136166/.minikube/machines/auto-917885/id_rsa Username:docker}
	I0911 11:43:09.321383  332029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0911 11:43:09.347511  332029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I0911 11:43:09.371634  332029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0911 11:43:09.395019  332029 provision.go:86] duration metric: configureAuth took 560.593148ms
	I0911 11:43:09.395046  332029 ubuntu.go:193] setting minikube options for container-runtime
	I0911 11:43:09.395245  332029 config.go:182] Loaded profile config "auto-917885": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0911 11:43:09.395377  332029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-917885
	I0911 11:43:09.413145  332029 main.go:141] libmachine: Using SSH client type: native
	I0911 11:43:09.413547  332029 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 127.0.0.1 33112 <nil> <nil>}
	I0911 11:43:09.413563  332029 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0911 11:43:09.632896  332029 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0911 11:43:09.632922  332029 machine.go:91] provisioned docker machine in 4.147727191s
	I0911 11:43:09.632931  332029 client.go:171] LocalClient.Create took 9.741829209s
	I0911 11:43:09.632948  332029 start.go:167] duration metric: libmachine.API.Create for "auto-917885" took 9.741873554s
	I0911 11:43:09.632956  332029 start.go:300] post-start starting for "auto-917885" (driver="docker")
	I0911 11:43:09.632967  332029 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0911 11:43:09.633042  332029 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0911 11:43:09.633087  332029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-917885
	I0911 11:43:09.650687  332029 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33112 SSHKeyPath:/home/jenkins/minikube-integration/17223-136166/.minikube/machines/auto-917885/id_rsa Username:docker}
	I0911 11:43:09.743242  332029 ssh_runner.go:195] Run: cat /etc/os-release
	I0911 11:43:09.746542  332029 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0911 11:43:09.746582  332029 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0911 11:43:09.746622  332029 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0911 11:43:09.746636  332029 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0911 11:43:09.746650  332029 filesync.go:126] Scanning /home/jenkins/minikube-integration/17223-136166/.minikube/addons for local assets ...
	I0911 11:43:09.746717  332029 filesync.go:126] Scanning /home/jenkins/minikube-integration/17223-136166/.minikube/files for local assets ...
	I0911 11:43:09.746810  332029 filesync.go:149] local asset: /home/jenkins/minikube-integration/17223-136166/.minikube/files/etc/ssl/certs/1434172.pem -> 1434172.pem in /etc/ssl/certs
	I0911 11:43:09.746920  332029 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0911 11:43:09.755141  332029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/files/etc/ssl/certs/1434172.pem --> /etc/ssl/certs/1434172.pem (1708 bytes)
	I0911 11:43:09.777523  332029 start.go:303] post-start completed in 144.551819ms
	I0911 11:43:09.777932  332029 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-917885
	I0911 11:43:09.795105  332029 profile.go:148] Saving config to /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/auto-917885/config.json ...
	I0911 11:43:09.795362  332029 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0911 11:43:09.795405  332029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-917885
	I0911 11:43:09.812704  332029 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33112 SSHKeyPath:/home/jenkins/minikube-integration/17223-136166/.minikube/machines/auto-917885/id_rsa Username:docker}
	I0911 11:43:09.903024  332029 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0911 11:43:09.907160  332029 start.go:128] duration metric: createHost completed in 10.01850521s
	I0911 11:43:09.907195  332029 start.go:83] releasing machines lock for "auto-917885", held for 10.018698576s
	I0911 11:43:09.907265  332029 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-917885
	I0911 11:43:09.924513  332029 ssh_runner.go:195] Run: cat /version.json
	I0911 11:43:09.924558  332029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-917885
	I0911 11:43:09.924622  332029 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0911 11:43:09.924691  332029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-917885
	I0911 11:43:09.942032  332029 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33112 SSHKeyPath:/home/jenkins/minikube-integration/17223-136166/.minikube/machines/auto-917885/id_rsa Username:docker}
	I0911 11:43:09.943207  332029 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33112 SSHKeyPath:/home/jenkins/minikube-integration/17223-136166/.minikube/machines/auto-917885/id_rsa Username:docker}
	I0911 11:43:10.119101  332029 ssh_runner.go:195] Run: systemctl --version
	I0911 11:43:10.123422  332029 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0911 11:43:10.263704  332029 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0911 11:43:10.268064  332029 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0911 11:43:10.286060  332029 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0911 11:43:10.286172  332029 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0911 11:43:10.313886  332029 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0911 11:43:10.313909  332029 start.go:466] detecting cgroup driver to use...
	I0911 11:43:10.313939  332029 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0911 11:43:10.313979  332029 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0911 11:43:10.328259  332029 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0911 11:43:10.338639  332029 docker.go:196] disabling cri-docker service (if available) ...
	I0911 11:43:10.338715  332029 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0911 11:43:10.350904  332029 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0911 11:43:10.364059  332029 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0911 11:43:10.439750  332029 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0911 11:43:10.515881  332029 docker.go:212] disabling docker service ...
	I0911 11:43:10.515940  332029 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0911 11:43:10.533588  332029 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0911 11:43:10.544152  332029 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0911 11:43:10.627817  332029 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0911 11:43:10.716047  332029 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0911 11:43:10.726750  332029 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0911 11:43:10.741105  332029 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0911 11:43:10.741166  332029 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 11:43:10.749865  332029 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0911 11:43:10.749922  332029 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 11:43:10.758917  332029 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 11:43:10.767720  332029 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
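
The sed commands above rewrite the CRI-O drop-in in place: pin the pause image, force the cgroupfs cgroup manager, and re-insert conmon_cgroup = "pod" directly after it, which is the value CRI-O expects with the cgroupfs manager. Their net effect on /etc/crio/crio.conf.d/02-crio.conf should be the fragment below; the section headers are an assumption about the drop-in's layout, while the key/value pairs come straight from the commands:

[crio.image]
pause_image = "registry.k8s.io/pause:3.9"

[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
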
	I0911 11:43:10.776532  332029 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0911 11:43:10.784678  332029 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0911 11:43:10.792031  332029 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0911 11:43:10.799286  332029 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0911 11:43:10.874361  332029 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0911 11:43:10.981651  332029 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0911 11:43:10.981707  332029 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0911 11:43:10.985258  332029 start.go:534] Will wait 60s for crictl version
	I0911 11:43:10.985299  332029 ssh_runner.go:195] Run: which crictl
	I0911 11:43:10.988407  332029 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0911 11:43:11.024605  332029 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0911 11:43:11.024693  332029 ssh_runner.go:195] Run: crio --version
	I0911 11:43:11.065205  332029 ssh_runner.go:195] Run: crio --version
	I0911 11:43:11.110271  332029 out.go:177] * Preparing Kubernetes v1.28.1 on CRI-O 1.24.6 ...
	I0911 11:43:08.277182  332971 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17223-136166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-917885:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b -I lz4 -xf /preloaded.tar -C /extractDir: (3.698899018s)
	I0911 11:43:08.277236  332971 kic.go:199] duration metric: took 3.699082 seconds to extract preloaded images to volume
	W0911 11:43:08.277396  332971 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0911 11:43:08.277525  332971 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0911 11:43:08.339126  332971 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kindnet-917885 --name kindnet-917885 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-917885 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kindnet-917885 --network kindnet-917885 --ip 192.168.94.2 --volume kindnet-917885:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b
	I0911 11:43:08.696209  332971 cli_runner.go:164] Run: docker container inspect kindnet-917885 --format={{.State.Running}}
	I0911 11:43:08.717474  332971 cli_runner.go:164] Run: docker container inspect kindnet-917885 --format={{.State.Status}}
	I0911 11:43:08.735412  332971 cli_runner.go:164] Run: docker exec kindnet-917885 stat /var/lib/dpkg/alternatives/iptables
	I0911 11:43:08.779586  332971 oci.go:144] the created container "kindnet-917885" has a running status.
	I0911 11:43:08.779624  332971 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/17223-136166/.minikube/machines/kindnet-917885/id_rsa...
	I0911 11:43:08.881366  332971 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17223-136166/.minikube/machines/kindnet-917885/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0911 11:43:08.904514  332971 cli_runner.go:164] Run: docker container inspect kindnet-917885 --format={{.State.Status}}
	I0911 11:43:08.923295  332971 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0911 11:43:08.923326  332971 kic_runner.go:114] Args: [docker exec --privileged kindnet-917885 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0911 11:43:08.977313  332971 cli_runner.go:164] Run: docker container inspect kindnet-917885 --format={{.State.Status}}
	I0911 11:43:08.999407  332971 machine.go:88] provisioning docker machine ...
	I0911 11:43:08.999450  332971 ubuntu.go:169] provisioning hostname "kindnet-917885"
	I0911 11:43:08.999517  332971 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-917885
	I0911 11:43:09.021290  332971 main.go:141] libmachine: Using SSH client type: native
	I0911 11:43:09.022008  332971 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 127.0.0.1 33117 <nil> <nil>}
	I0911 11:43:09.022036  332971 main.go:141] libmachine: About to run SSH command:
	sudo hostname kindnet-917885 && echo "kindnet-917885" | sudo tee /etc/hostname
	I0911 11:43:09.022740  332971 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:39798->127.0.0.1:33117: read: connection reset by peer
	I0911 11:43:07.931564  306855 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0911 11:43:07.938025  306855 api_server.go:269] stopped: https://192.168.67.2:8443/healthz: Get "https://192.168.67.2:8443/healthz": dial tcp 192.168.67.2:8443: connect: connection refused
	I0911 11:43:07.938085  306855 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0911 11:43:07.938177  306855 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0911 11:43:07.972271  306855 cri.go:89] found id: "b7f54e8d3985de06b7fea85e2d43d2c57b87e1c8541ffc22fbb1a69c05525596"
	I0911 11:43:07.972291  306855 cri.go:89] found id: ""
	I0911 11:43:07.972297  306855 logs.go:284] 1 containers: [b7f54e8d3985de06b7fea85e2d43d2c57b87e1c8541ffc22fbb1a69c05525596]
	I0911 11:43:07.972352  306855 ssh_runner.go:195] Run: which crictl
	I0911 11:43:07.975786  306855 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0911 11:43:07.975837  306855 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0911 11:43:08.009408  306855 cri.go:89] found id: ""
	I0911 11:43:08.009436  306855 logs.go:284] 0 containers: []
	W0911 11:43:08.009445  306855 logs.go:286] No container was found matching "etcd"
	I0911 11:43:08.009451  306855 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0911 11:43:08.009502  306855 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0911 11:43:08.044449  306855 cri.go:89] found id: ""
	I0911 11:43:08.044485  306855 logs.go:284] 0 containers: []
	W0911 11:43:08.044496  306855 logs.go:286] No container was found matching "coredns"
	I0911 11:43:08.044504  306855 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0911 11:43:08.044558  306855 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0911 11:43:08.078114  306855 cri.go:89] found id: ""
	I0911 11:43:08.078143  306855 logs.go:284] 0 containers: []
	W0911 11:43:08.078153  306855 logs.go:286] No container was found matching "kube-scheduler"
	I0911 11:43:08.078161  306855 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0911 11:43:08.078218  306855 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0911 11:43:08.112488  306855 cri.go:89] found id: ""
	I0911 11:43:08.112515  306855 logs.go:284] 0 containers: []
	W0911 11:43:08.112522  306855 logs.go:286] No container was found matching "kube-proxy"
	I0911 11:43:08.112527  306855 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0911 11:43:08.112590  306855 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0911 11:43:08.145800  306855 cri.go:89] found id: ""
	I0911 11:43:08.145826  306855 logs.go:284] 0 containers: []
	W0911 11:43:08.145835  306855 logs.go:286] No container was found matching "kube-controller-manager"
	I0911 11:43:08.145841  306855 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0911 11:43:08.145905  306855 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0911 11:43:08.181648  306855 cri.go:89] found id: ""
	I0911 11:43:08.181678  306855 logs.go:284] 0 containers: []
	W0911 11:43:08.181688  306855 logs.go:286] No container was found matching "kindnet"
	I0911 11:43:08.181696  306855 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0911 11:43:08.181757  306855 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0911 11:43:08.238219  306855 cri.go:89] found id: ""
	I0911 11:43:08.238242  306855 logs.go:284] 0 containers: []
	W0911 11:43:08.238249  306855 logs.go:286] No container was found matching "storage-provisioner"
	I0911 11:43:08.238260  306855 logs.go:123] Gathering logs for kubelet ...
	I0911 11:43:08.238274  306855 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0911 11:43:08.337476  306855 logs.go:123] Gathering logs for dmesg ...
	I0911 11:43:08.337513  306855 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0911 11:43:08.372154  306855 logs.go:123] Gathering logs for describe nodes ...
	I0911 11:43:08.372192  306855 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0911 11:43:08.463038  306855 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0911 11:43:08.463063  306855 logs.go:123] Gathering logs for kube-apiserver [b7f54e8d3985de06b7fea85e2d43d2c57b87e1c8541ffc22fbb1a69c05525596] ...
	I0911 11:43:08.463076  306855 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b7f54e8d3985de06b7fea85e2d43d2c57b87e1c8541ffc22fbb1a69c05525596"
	I0911 11:43:08.505986  306855 logs.go:123] Gathering logs for CRI-O ...
	I0911 11:43:08.506026  306855 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0911 11:43:08.534161  306855 logs.go:123] Gathering logs for container status ...
	I0911 11:43:08.534274  306855 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0911 11:43:11.078379  306855 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0911 11:43:11.078811  306855 api_server.go:269] stopped: https://192.168.67.2:8443/healthz: Get "https://192.168.67.2:8443/healthz": dial tcp 192.168.67.2:8443: connect: connection refused
	I0911 11:43:11.078875  306855 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0911 11:43:11.078937  306855 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0911 11:43:11.116445  306855 cri.go:89] found id: "b7f54e8d3985de06b7fea85e2d43d2c57b87e1c8541ffc22fbb1a69c05525596"
	I0911 11:43:11.116470  306855 cri.go:89] found id: ""
	I0911 11:43:11.116480  306855 logs.go:284] 1 containers: [b7f54e8d3985de06b7fea85e2d43d2c57b87e1c8541ffc22fbb1a69c05525596]
	I0911 11:43:11.116535  306855 ssh_runner.go:195] Run: which crictl
	I0911 11:43:11.120217  306855 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0911 11:43:11.120273  306855 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0911 11:43:11.163427  306855 cri.go:89] found id: ""
	I0911 11:43:11.163451  306855 logs.go:284] 0 containers: []
	W0911 11:43:11.163461  306855 logs.go:286] No container was found matching "etcd"
	I0911 11:43:11.163467  306855 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0911 11:43:11.163525  306855 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0911 11:43:11.206377  306855 cri.go:89] found id: ""
	I0911 11:43:11.206402  306855 logs.go:284] 0 containers: []
	W0911 11:43:11.206412  306855 logs.go:286] No container was found matching "coredns"
	I0911 11:43:11.206419  306855 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0911 11:43:11.206475  306855 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0911 11:43:11.254480  306855 cri.go:89] found id: ""
	I0911 11:43:11.254522  306855 logs.go:284] 0 containers: []
	W0911 11:43:11.254534  306855 logs.go:286] No container was found matching "kube-scheduler"
	I0911 11:43:11.254542  306855 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0911 11:43:11.254622  306855 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0911 11:43:11.299758  306855 cri.go:89] found id: ""
	I0911 11:43:11.299796  306855 logs.go:284] 0 containers: []
	W0911 11:43:11.299807  306855 logs.go:286] No container was found matching "kube-proxy"
	I0911 11:43:11.299816  306855 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0911 11:43:11.299874  306855 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0911 11:43:11.363586  306855 cri.go:89] found id: ""
	I0911 11:43:11.363621  306855 logs.go:284] 0 containers: []
	W0911 11:43:11.363632  306855 logs.go:286] No container was found matching "kube-controller-manager"
	I0911 11:43:11.363641  306855 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0911 11:43:11.363700  306855 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0911 11:43:11.405113  306855 cri.go:89] found id: ""
	I0911 11:43:11.405133  306855 logs.go:284] 0 containers: []
	W0911 11:43:11.405140  306855 logs.go:286] No container was found matching "kindnet"
	I0911 11:43:11.405145  306855 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0911 11:43:11.405192  306855 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0911 11:43:11.482830  306855 cri.go:89] found id: ""
	I0911 11:43:11.482854  306855 logs.go:284] 0 containers: []
	W0911 11:43:11.482863  306855 logs.go:286] No container was found matching "storage-provisioner"
	I0911 11:43:11.482874  306855 logs.go:123] Gathering logs for describe nodes ...
	I0911 11:43:11.482893  306855 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0911 11:43:11.112173  332029 cli_runner.go:164] Run: docker network inspect auto-917885 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0911 11:43:11.131480  332029 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0911 11:43:11.135231  332029 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0911 11:43:11.146900  332029 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0911 11:43:11.146959  332029 ssh_runner.go:195] Run: sudo crictl images --output json
	I0911 11:43:11.210931  332029 crio.go:496] all images are preloaded for cri-o runtime.
	I0911 11:43:11.210960  332029 crio.go:415] Images already preloaded, skipping extraction
	I0911 11:43:11.211011  332029 ssh_runner.go:195] Run: sudo crictl images --output json
	I0911 11:43:11.257774  332029 crio.go:496] all images are preloaded for cri-o runtime.
	I0911 11:43:11.257794  332029 cache_images.go:84] Images are preloaded, skipping loading
	I0911 11:43:11.257856  332029 ssh_runner.go:195] Run: crio config
	I0911 11:43:11.320036  332029 cni.go:84] Creating CNI manager for ""
	I0911 11:43:11.320072  332029 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0911 11:43:11.320098  332029 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0911 11:43:11.320121  332029 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-917885 NodeName:auto-917885 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0911 11:43:11.320295  332029 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-917885"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0911 11:43:11.320389  332029 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=auto-917885 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:auto-917885 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0911 11:43:11.320457  332029 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0911 11:43:11.330713  332029 binaries.go:44] Found k8s binaries, skipping transfer
	I0911 11:43:11.330778  332029 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0911 11:43:11.339920  332029 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (421 bytes)
	I0911 11:43:11.356791  332029 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0911 11:43:11.379085  332029 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2092 bytes)
	I0911 11:43:11.398515  332029 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0911 11:43:11.403013  332029 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0911 11:43:11.415061  332029 certs.go:56] Setting up /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/auto-917885 for IP: 192.168.85.2
	I0911 11:43:11.415122  332029 certs.go:190] acquiring lock for shared ca certs: {Name:mk582952512be9164e5fce9dc802f18bafa97346 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 11:43:11.415305  332029 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17223-136166/.minikube/ca.key
	I0911 11:43:11.415358  332029 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17223-136166/.minikube/proxy-client-ca.key
	I0911 11:43:11.415434  332029 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/auto-917885/client.key
	I0911 11:43:11.415453  332029 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/auto-917885/client.crt with IP's: []
	I0911 11:43:11.726512  332029 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/auto-917885/client.crt ...
	I0911 11:43:11.726543  332029 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/auto-917885/client.crt: {Name:mkac4ac31b98b98f96543b23e868530abc293031 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 11:43:11.726762  332029 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/auto-917885/client.key ...
	I0911 11:43:11.726779  332029 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/auto-917885/client.key: {Name:mkd8011a457ecb6c9a92be3bbc3ddb4af3b9db6c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 11:43:11.726876  332029 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/auto-917885/apiserver.key.43b9df8c
	I0911 11:43:11.726891  332029 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/auto-917885/apiserver.crt.43b9df8c with IP's: [192.168.85.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0911 11:43:11.852956  332029 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/auto-917885/apiserver.crt.43b9df8c ...
	I0911 11:43:11.852987  332029 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/auto-917885/apiserver.crt.43b9df8c: {Name:mk0db99fc6cf59a4b0bf55893b96a80bfd62b42a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 11:43:11.853190  332029 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/auto-917885/apiserver.key.43b9df8c ...
	I0911 11:43:11.853205  332029 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/auto-917885/apiserver.key.43b9df8c: {Name:mkf40d76208a91167f509bb89a5cd0baee31f7e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 11:43:11.853296  332029 certs.go:337] copying /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/auto-917885/apiserver.crt.43b9df8c -> /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/auto-917885/apiserver.crt
	I0911 11:43:11.853391  332029 certs.go:341] copying /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/auto-917885/apiserver.key.43b9df8c -> /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/auto-917885/apiserver.key
	I0911 11:43:11.853444  332029 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/auto-917885/proxy-client.key
	I0911 11:43:11.853462  332029 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/auto-917885/proxy-client.crt with IP's: []
	I0911 11:43:12.041495  332029 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/auto-917885/proxy-client.crt ...
	I0911 11:43:12.041523  332029 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/auto-917885/proxy-client.crt: {Name:mk60399ec28d898ea32193c36fb15f7d975e6000 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 11:43:12.041673  332029 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/auto-917885/proxy-client.key ...
	I0911 11:43:12.041683  332029 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/auto-917885/proxy-client.key: {Name:mk6a19de294038c20e55b5fcb30414e7a5745cec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 11:43:12.041835  332029 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-136166/.minikube/certs/home/jenkins/minikube-integration/17223-136166/.minikube/certs/143417.pem (1338 bytes)
	W0911 11:43:12.041869  332029 certs.go:433] ignoring /home/jenkins/minikube-integration/17223-136166/.minikube/certs/home/jenkins/minikube-integration/17223-136166/.minikube/certs/143417_empty.pem, impossibly tiny 0 bytes
	I0911 11:43:12.041879  332029 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-136166/.minikube/certs/home/jenkins/minikube-integration/17223-136166/.minikube/certs/ca-key.pem (1679 bytes)
	I0911 11:43:12.041909  332029 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-136166/.minikube/certs/home/jenkins/minikube-integration/17223-136166/.minikube/certs/ca.pem (1082 bytes)
	I0911 11:43:12.041936  332029 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-136166/.minikube/certs/home/jenkins/minikube-integration/17223-136166/.minikube/certs/cert.pem (1123 bytes)
	I0911 11:43:12.041958  332029 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-136166/.minikube/certs/home/jenkins/minikube-integration/17223-136166/.minikube/certs/key.pem (1679 bytes)
	I0911 11:43:12.041996  332029 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-136166/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17223-136166/.minikube/files/etc/ssl/certs/1434172.pem (1708 bytes)
	I0911 11:43:12.042608  332029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/auto-917885/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0911 11:43:12.068485  332029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/auto-917885/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0911 11:43:12.099318  332029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/auto-917885/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0911 11:43:12.122381  332029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/auto-917885/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0911 11:43:12.144796  332029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0911 11:43:12.170257  332029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0911 11:43:12.201978  332029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0911 11:43:12.226414  332029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0911 11:43:12.248570  332029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/certs/143417.pem --> /usr/share/ca-certificates/143417.pem (1338 bytes)
	I0911 11:43:12.281698  332029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/files/etc/ssl/certs/1434172.pem --> /usr/share/ca-certificates/1434172.pem (1708 bytes)
	I0911 11:43:12.308814  332029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0911 11:43:12.337544  332029 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0911 11:43:12.355271  332029 ssh_runner.go:195] Run: openssl version
	I0911 11:43:12.361375  332029 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1434172.pem && ln -fs /usr/share/ca-certificates/1434172.pem /etc/ssl/certs/1434172.pem"
	I0911 11:43:12.372515  332029 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1434172.pem
	I0911 11:43:12.376164  332029 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 11 11:15 /usr/share/ca-certificates/1434172.pem
	I0911 11:43:12.376226  332029 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1434172.pem
	I0911 11:43:12.383499  332029 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1434172.pem /etc/ssl/certs/3ec20f2e.0"
	I0911 11:43:12.393618  332029 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0911 11:43:12.403617  332029 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0911 11:43:12.407153  332029 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 11 11:09 /usr/share/ca-certificates/minikubeCA.pem
	I0911 11:43:12.407217  332029 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0911 11:43:12.414335  332029 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0911 11:43:12.424372  332029 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/143417.pem && ln -fs /usr/share/ca-certificates/143417.pem /etc/ssl/certs/143417.pem"
	I0911 11:43:12.433178  332029 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/143417.pem
	I0911 11:43:12.436421  332029 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 11 11:15 /usr/share/ca-certificates/143417.pem
	I0911 11:43:12.436468  332029 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/143417.pem
	I0911 11:43:12.442613  332029 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/143417.pem /etc/ssl/certs/51391683.0"
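Each ls / openssl x509 / ln triple above follows the OpenSSL CA-directory convention: the 8-hex-digit value printed by openssl x509 -hash (51391683, 3ec20f2e, b5213941 in this run) becomes the <hash>.0 symlink name that OpenSSL scans for in /etc/ssl/certs. A minimal sketch of that step for one certificate (example.pem is a placeholder, not a file from this run):

	pem=/usr/share/ca-certificates/example.pem            # placeholder certificate
	sudo ln -fs "$pem" /etc/ssl/certs/example.pem         # expose it in the CA directory
	hash=$(openssl x509 -hash -noout -in "$pem")          # prints the subject hash, e.g. 51391683
	sudo ln -fs /etc/ssl/certs/example.pem "/etc/ssl/certs/${hash}.0"   # lookup link OpenSSL expects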
	I0911 11:43:12.451231  332029 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0911 11:43:12.454403  332029 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0911 11:43:12.454459  332029 kubeadm.go:404] StartCluster: {Name:auto-917885 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:auto-917885 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 11:43:12.454552  332029 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0911 11:43:12.454603  332029 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0911 11:43:12.507699  332029 cri.go:89] found id: ""
	I0911 11:43:12.507786  332029 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0911 11:43:12.516677  332029 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0911 11:43:12.524887  332029 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0911 11:43:12.524946  332029 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0911 11:43:12.533165  332029 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0911 11:43:12.533211  332029 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0911 11:43:12.586668  332029 kubeadm.go:322] [init] Using Kubernetes version: v1.28.1
	I0911 11:43:12.586934  332029 kubeadm.go:322] [preflight] Running pre-flight checks
	I0911 11:43:12.629539  332029 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0911 11:43:12.629625  332029 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1041-gcp
	I0911 11:43:12.629671  332029 kubeadm.go:322] OS: Linux
	I0911 11:43:12.629720  332029 kubeadm.go:322] CGROUPS_CPU: enabled
	I0911 11:43:12.629761  332029 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0911 11:43:12.629823  332029 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0911 11:43:12.629866  332029 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0911 11:43:12.629916  332029 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0911 11:43:12.629987  332029 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0911 11:43:12.630027  332029 kubeadm.go:322] CGROUPS_PIDS: enabled
	I0911 11:43:12.630075  332029 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I0911 11:43:12.630200  332029 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I0911 11:43:12.715287  332029 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0911 11:43:12.715431  332029 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0911 11:43:12.715566  332029 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0911 11:43:12.948298  332029 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0911 11:43:11.285987  333486 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0911 11:43:11.286017  333486 machine.go:91] provisioned docker machine in 6.451367854s
	I0911 11:43:11.286030  333486 start.go:300] post-start starting for "pause-844693" (driver="docker")
	I0911 11:43:11.286042  333486 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0911 11:43:11.286132  333486 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0911 11:43:11.286182  333486 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-844693
	I0911 11:43:11.307050  333486 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33107 SSHKeyPath:/home/jenkins/minikube-integration/17223-136166/.minikube/machines/pause-844693/id_rsa Username:docker}
	I0911 11:43:11.405300  333486 ssh_runner.go:195] Run: cat /etc/os-release
	I0911 11:43:11.408871  333486 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0911 11:43:11.408907  333486 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0911 11:43:11.408920  333486 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0911 11:43:11.408928  333486 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0911 11:43:11.408941  333486 filesync.go:126] Scanning /home/jenkins/minikube-integration/17223-136166/.minikube/addons for local assets ...
	I0911 11:43:11.409004  333486 filesync.go:126] Scanning /home/jenkins/minikube-integration/17223-136166/.minikube/files for local assets ...
	I0911 11:43:11.409093  333486 filesync.go:149] local asset: /home/jenkins/minikube-integration/17223-136166/.minikube/files/etc/ssl/certs/1434172.pem -> 1434172.pem in /etc/ssl/certs
	I0911 11:43:11.409200  333486 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0911 11:43:11.420179  333486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/files/etc/ssl/certs/1434172.pem --> /etc/ssl/certs/1434172.pem (1708 bytes)
	I0911 11:43:11.446924  333486 start.go:303] post-start completed in 160.874894ms
	I0911 11:43:11.446998  333486 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0911 11:43:11.447044  333486 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-844693
	I0911 11:43:11.468260  333486 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33107 SSHKeyPath:/home/jenkins/minikube-integration/17223-136166/.minikube/machines/pause-844693/id_rsa Username:docker}
	I0911 11:43:11.582593  333486 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0911 11:43:11.589173  333486 fix.go:56] fixHost completed within 6.780964082s
	I0911 11:43:11.589199  333486 start.go:83] releasing machines lock for "pause-844693", held for 6.781009426s
	I0911 11:43:11.589270  333486 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-844693
	I0911 11:43:11.613924  333486 ssh_runner.go:195] Run: cat /version.json
	I0911 11:43:11.613979  333486 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-844693
	I0911 11:43:11.613990  333486 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0911 11:43:11.614042  333486 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-844693
	I0911 11:43:11.636822  333486 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33107 SSHKeyPath:/home/jenkins/minikube-integration/17223-136166/.minikube/machines/pause-844693/id_rsa Username:docker}
	I0911 11:43:11.639682  333486 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33107 SSHKeyPath:/home/jenkins/minikube-integration/17223-136166/.minikube/machines/pause-844693/id_rsa Username:docker}
	I0911 11:43:12.162675  333486 ssh_runner.go:195] Run: systemctl --version
	I0911 11:43:12.168474  333486 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0911 11:43:12.464323  333486 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0911 11:43:12.472149  333486 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0911 11:43:12.483211  333486 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0911 11:43:12.483296  333486 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0911 11:43:12.495300  333486 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0911 11:43:12.495326  333486 start.go:466] detecting cgroup driver to use...
	I0911 11:43:12.495359  333486 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0911 11:43:12.495407  333486 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0911 11:43:12.571831  333486 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0911 11:43:12.587063  333486 docker.go:196] disabling cri-docker service (if available) ...
	I0911 11:43:12.587110  333486 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0911 11:43:12.607400  333486 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0911 11:43:12.669671  333486 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0911 11:43:12.980146  333486 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0911 11:43:13.261608  333486 docker.go:212] disabling docker service ...
	I0911 11:43:13.261672  333486 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0911 11:43:13.277615  333486 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0911 11:43:13.292503  333486 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0911 11:43:13.664743  333486 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0911 11:43:13.894906  333486 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0911 11:43:13.909934  333486 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0911 11:43:13.972738  333486 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0911 11:43:13.972803  333486 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 11:43:13.987443  333486 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0911 11:43:13.987507  333486 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 11:43:13.999727  333486 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 11:43:14.011903  333486 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
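The three sed edits above (pause image, cgroup manager, conmon cgroup) all rewrite the same CRI-O drop-in. A sketch of what /etc/crio/crio.conf.d/02-crio.conf plausibly converges to afterwards, reconstructed from those commands rather than copied from the node (section placement follows CRI-O's documented layout; the real file may carry additional keys):

	sudo tee /etc/crio/crio.conf.d/02-crio.conf >/dev/null <<-'EOF'
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.9"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	EOF
	sudo systemctl restart crio   # CRI-O only rereads its config on restart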
	I0911 11:43:12.951479  332029 out.go:204]   - Generating certificates and keys ...
	I0911 11:43:12.951651  332029 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0911 11:43:12.951725  332029 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0911 11:43:13.212510  332029 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0911 11:43:13.380152  332029 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0911 11:43:13.558298  332029 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0911 11:43:13.744661  332029 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0911 11:43:13.939764  332029 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0911 11:43:13.939977  332029 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [auto-917885 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0911 11:43:14.184252  332029 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0911 11:43:14.184445  332029 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [auto-917885 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0911 11:43:14.266337  332029 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0911 11:43:14.419439  332029 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0911 11:43:12.175779  332971 main.go:141] libmachine: SSH cmd err, output: <nil>: kindnet-917885
	
	I0911 11:43:12.175864  332971 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-917885
	I0911 11:43:12.203871  332971 main.go:141] libmachine: Using SSH client type: native
	I0911 11:43:12.204276  332971 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 127.0.0.1 33117 <nil> <nil>}
	I0911 11:43:12.204289  332971 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-917885' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-917885/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-917885' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0911 11:43:12.342026  332971 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0911 11:43:12.342056  332971 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17223-136166/.minikube CaCertPath:/home/jenkins/minikube-integration/17223-136166/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17223-136166/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17223-136166/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17223-136166/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17223-136166/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17223-136166/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17223-136166/.minikube}
	I0911 11:43:12.342082  332971 ubuntu.go:177] setting up certificates
	I0911 11:43:12.342114  332971 provision.go:83] configureAuth start
	I0911 11:43:12.342177  332971 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-917885
	I0911 11:43:12.359887  332971 provision.go:138] copyHostCerts
	I0911 11:43:12.359957  332971 exec_runner.go:144] found /home/jenkins/minikube-integration/17223-136166/.minikube/ca.pem, removing ...
	I0911 11:43:12.359968  332971 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17223-136166/.minikube/ca.pem
	I0911 11:43:12.360048  332971 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17223-136166/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17223-136166/.minikube/ca.pem (1082 bytes)
	I0911 11:43:12.360211  332971 exec_runner.go:144] found /home/jenkins/minikube-integration/17223-136166/.minikube/cert.pem, removing ...
	I0911 11:43:12.360221  332971 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17223-136166/.minikube/cert.pem
	I0911 11:43:12.360258  332971 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17223-136166/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17223-136166/.minikube/cert.pem (1123 bytes)
	I0911 11:43:12.360375  332971 exec_runner.go:144] found /home/jenkins/minikube-integration/17223-136166/.minikube/key.pem, removing ...
	I0911 11:43:12.360383  332971 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17223-136166/.minikube/key.pem
	I0911 11:43:12.360419  332971 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17223-136166/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17223-136166/.minikube/key.pem (1679 bytes)
	I0911 11:43:12.360551  332971 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17223-136166/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17223-136166/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17223-136166/.minikube/certs/ca-key.pem org=jenkins.kindnet-917885 san=[192.168.94.2 127.0.0.1 localhost 127.0.0.1 minikube kindnet-917885]
	I0911 11:43:12.646544  332971 provision.go:172] copyRemoteCerts
	I0911 11:43:12.646611  332971 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0911 11:43:12.646647  332971 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-917885
	I0911 11:43:12.676089  332971 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33117 SSHKeyPath:/home/jenkins/minikube-integration/17223-136166/.minikube/machines/kindnet-917885/id_rsa Username:docker}
	I0911 11:43:12.780659  332971 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0911 11:43:12.808561  332971 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0911 11:43:12.832784  332971 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0911 11:43:12.855894  332971 provision.go:86] duration metric: configureAuth took 513.760393ms
	I0911 11:43:12.855923  332971 ubuntu.go:193] setting minikube options for container-runtime
	I0911 11:43:12.856117  332971 config.go:182] Loaded profile config "kindnet-917885": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0911 11:43:12.856227  332971 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-917885
	I0911 11:43:12.882601  332971 main.go:141] libmachine: Using SSH client type: native
	I0911 11:43:12.883252  332971 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 127.0.0.1 33117 <nil> <nil>}
	I0911 11:43:12.883278  332971 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0911 11:43:13.147541  332971 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0911 11:43:13.147571  332971 machine.go:91] provisioned docker machine in 4.148135425s
	I0911 11:43:13.147582  332971 client.go:171] LocalClient.Create took 11.780924164s
	I0911 11:43:13.147601  332971 start.go:167] duration metric: libmachine.API.Create for "kindnet-917885" took 11.780973851s
	I0911 11:43:13.147611  332971 start.go:300] post-start starting for "kindnet-917885" (driver="docker")
	I0911 11:43:13.147622  332971 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0911 11:43:13.147684  332971 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0911 11:43:13.147725  332971 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-917885
	I0911 11:43:13.169512  332971 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33117 SSHKeyPath:/home/jenkins/minikube-integration/17223-136166/.minikube/machines/kindnet-917885/id_rsa Username:docker}
	I0911 11:43:13.273841  332971 ssh_runner.go:195] Run: cat /etc/os-release
	I0911 11:43:13.277733  332971 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0911 11:43:13.277779  332971 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0911 11:43:13.277800  332971 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0911 11:43:13.277809  332971 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0911 11:43:13.277822  332971 filesync.go:126] Scanning /home/jenkins/minikube-integration/17223-136166/.minikube/addons for local assets ...
	I0911 11:43:13.277883  332971 filesync.go:126] Scanning /home/jenkins/minikube-integration/17223-136166/.minikube/files for local assets ...
	I0911 11:43:13.277978  332971 filesync.go:149] local asset: /home/jenkins/minikube-integration/17223-136166/.minikube/files/etc/ssl/certs/1434172.pem -> 1434172.pem in /etc/ssl/certs
	I0911 11:43:13.278123  332971 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0911 11:43:13.289212  332971 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/files/etc/ssl/certs/1434172.pem --> /etc/ssl/certs/1434172.pem (1708 bytes)
	I0911 11:43:13.316654  332971 start.go:303] post-start completed in 169.029243ms
	I0911 11:43:13.316995  332971 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-917885
	I0911 11:43:13.333622  332971 profile.go:148] Saving config to /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/kindnet-917885/config.json ...
	I0911 11:43:13.333945  332971 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0911 11:43:13.333998  332971 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-917885
	I0911 11:43:13.351639  332971 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33117 SSHKeyPath:/home/jenkins/minikube-integration/17223-136166/.minikube/machines/kindnet-917885/id_rsa Username:docker}
	I0911 11:43:13.446901  332971 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0911 11:43:13.451110  332971 start.go:128] duration metric: createHost completed in 12.087115886s
	I0911 11:43:13.451135  332971 start.go:83] releasing machines lock for "kindnet-917885", held for 12.087293995s
	I0911 11:43:13.451204  332971 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-917885
	I0911 11:43:13.475139  332971 ssh_runner.go:195] Run: cat /version.json
	I0911 11:43:13.475156  332971 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0911 11:43:13.475197  332971 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-917885
	I0911 11:43:13.475217  332971 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-917885
	I0911 11:43:13.502685  332971 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33117 SSHKeyPath:/home/jenkins/minikube-integration/17223-136166/.minikube/machines/kindnet-917885/id_rsa Username:docker}
	I0911 11:43:13.506202  332971 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33117 SSHKeyPath:/home/jenkins/minikube-integration/17223-136166/.minikube/machines/kindnet-917885/id_rsa Username:docker}
	I0911 11:43:13.686524  332971 ssh_runner.go:195] Run: systemctl --version
	I0911 11:43:13.691782  332971 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0911 11:43:13.834658  332971 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0911 11:43:13.838901  332971 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0911 11:43:13.857419  332971 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0911 11:43:13.857504  332971 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0911 11:43:13.889783  332971 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0911 11:43:13.889807  332971 start.go:466] detecting cgroup driver to use...
	I0911 11:43:13.889839  332971 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0911 11:43:13.889887  332971 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0911 11:43:13.911298  332971 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0911 11:43:13.922049  332971 docker.go:196] disabling cri-docker service (if available) ...
	I0911 11:43:13.922143  332971 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0911 11:43:13.935275  332971 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0911 11:43:13.948828  332971 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0911 11:43:14.038870  332971 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0911 11:43:14.132803  332971 docker.go:212] disabling docker service ...
	I0911 11:43:14.132867  332971 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0911 11:43:14.153826  332971 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0911 11:43:14.168806  332971 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0911 11:43:14.254891  332971 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0911 11:43:14.353671  332971 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0911 11:43:14.365017  332971 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0911 11:43:14.380992  332971 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0911 11:43:14.381053  332971 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 11:43:14.390515  332971 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0911 11:43:14.390601  332971 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 11:43:14.399509  332971 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 11:43:14.408420  332971 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 11:43:14.418034  332971 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0911 11:43:14.426909  332971 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0911 11:43:14.434708  332971 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
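The sysctl probe of net.bridge.bridge-nf-call-iptables and the write to /proc/sys/net/ipv4/ip_forward above cover the two standard kernel prerequisites for Kubernetes pod networking. A persistent variant, as a sketch (assumes the br_netfilter module is loaded, otherwise the bridge key does not exist):

	sudo tee /etc/sysctl.d/99-kubernetes.conf >/dev/null <<-'EOF'
	net.bridge.bridge-nf-call-iptables = 1
	net.ipv4.ip_forward = 1
	EOF
	sudo sysctl --system   # reload every sysctl drop-in, including the new one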
	I0911 11:43:14.442150  332971 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0911 11:43:14.532616  332971 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0911 11:43:14.642577  332971 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0911 11:43:14.642649  332971 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0911 11:43:14.646131  332971 start.go:534] Will wait 60s for crictl version
	I0911 11:43:14.646189  332971 ssh_runner.go:195] Run: which crictl
	I0911 11:43:14.649268  332971 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0911 11:43:14.687495  332971 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0911 11:43:14.687576  332971 ssh_runner.go:195] Run: crio --version
	I0911 11:43:14.721799  332971 ssh_runner.go:195] Run: crio --version
	I0911 11:43:14.759450  332971 out.go:177] * Preparing Kubernetes v1.28.1 on CRI-O 1.24.6 ...
	I0911 11:43:14.696099  332029 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0911 11:43:14.696225  332029 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0911 11:43:14.974410  332029 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0911 11:43:15.031360  332029 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0911 11:43:15.299732  332029 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0911 11:43:15.398904  332029 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0911 11:43:15.399368  332029 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0911 11:43:15.401605  332029 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0911 11:43:14.760871  332971 cli_runner.go:164] Run: docker network inspect kindnet-917885 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
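The Go template handed to docker network inspect above assembles a JSON blob field by field. For comparison, the subnet and gateway alone can be read with a much smaller template over the same IPAM data (same docker CLI; the output shown in the comment is an assumption about this network):

	# Prints something like: kindnet-917885: 192.168.94.0/24 via 192.168.94.1
	docker network inspect kindnet-917885 \
	  --format '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}} via {{.Gateway}}{{end}}'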
	I0911 11:43:14.777778  332971 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I0911 11:43:14.781248  332971 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0911 11:43:14.791372  332971 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0911 11:43:14.791435  332971 ssh_runner.go:195] Run: sudo crictl images --output json
	I0911 11:43:14.841708  332971 crio.go:496] all images are preloaded for cri-o runtime.
	I0911 11:43:14.841728  332971 crio.go:415] Images already preloaded, skipping extraction
	I0911 11:43:14.841772  332971 ssh_runner.go:195] Run: sudo crictl images --output json
	I0911 11:43:14.874988  332971 crio.go:496] all images are preloaded for cri-o runtime.
	I0911 11:43:14.875007  332971 cache_images.go:84] Images are preloaded, skipping loading
	I0911 11:43:14.875060  332971 ssh_runner.go:195] Run: crio config
	I0911 11:43:14.924211  332971 cni.go:84] Creating CNI manager for "kindnet"
	I0911 11:43:14.924245  332971 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0911 11:43:14.924264  332971 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kindnet-917885 NodeName:kindnet-917885 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0911 11:43:14.924390  332971 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kindnet-917885"
	  kubeletExtraArgs:
	    node-ip: 192.168.94.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0911 11:43:14.924456  332971 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=kindnet-917885 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:kindnet-917885 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:}
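Both profiles scp their rendered config to /var/tmp/minikube/kubeadm.yaml before initializing. To exercise such a config without mutating the node, kubeadm's dry-run mode is one option; a sketch using the binary path from the log above (the test harness itself does not run this):

	sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" \
	  kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run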
	I0911 11:43:14.924510  332971 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0911 11:43:14.933032  332971 binaries.go:44] Found k8s binaries, skipping transfer
	I0911 11:43:14.933097  332971 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0911 11:43:14.941039  332971 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (424 bytes)
	I0911 11:43:14.957743  332971 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0911 11:43:14.975098  332971 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2095 bytes)
	I0911 11:43:14.992439  332971 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I0911 11:43:14.995785  332971 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0911 11:43:15.006543  332971 certs.go:56] Setting up /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/kindnet-917885 for IP: 192.168.94.2
	I0911 11:43:15.006576  332971 certs.go:190] acquiring lock for shared ca certs: {Name:mk582952512be9164e5fce9dc802f18bafa97346 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 11:43:15.006744  332971 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17223-136166/.minikube/ca.key
	I0911 11:43:15.006806  332971 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17223-136166/.minikube/proxy-client-ca.key
	I0911 11:43:15.006860  332971 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/kindnet-917885/client.key
	I0911 11:43:15.006881  332971 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/kindnet-917885/client.crt with IP's: []
	I0911 11:43:15.500709  332971 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/kindnet-917885/client.crt ...
	I0911 11:43:15.500739  332971 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/kindnet-917885/client.crt: {Name:mk184d328255d58730b1965ed92467ece818018a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 11:43:15.500915  332971 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/kindnet-917885/client.key ...
	I0911 11:43:15.500929  332971 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/kindnet-917885/client.key: {Name:mk8607643ccf2f9e1d15a7c037e1efa764518611 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 11:43:15.501031  332971 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/kindnet-917885/apiserver.key.ad8e880a
	I0911 11:43:15.501050  332971 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/kindnet-917885/apiserver.crt.ad8e880a with IP's: [192.168.94.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0911 11:43:15.596310  332971 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/kindnet-917885/apiserver.crt.ad8e880a ...
	I0911 11:43:15.596347  332971 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/kindnet-917885/apiserver.crt.ad8e880a: {Name:mk751e13964bc37fa4a76f7995d79f02afa0a9e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 11:43:15.596566  332971 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/kindnet-917885/apiserver.key.ad8e880a ...
	I0911 11:43:15.596602  332971 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/kindnet-917885/apiserver.key.ad8e880a: {Name:mkdee36fc4b75a13c92537734e4550c412c1cbaf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 11:43:15.596701  332971 certs.go:337] copying /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/kindnet-917885/apiserver.crt.ad8e880a -> /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/kindnet-917885/apiserver.crt
	I0911 11:43:15.596804  332971 certs.go:341] copying /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/kindnet-917885/apiserver.key.ad8e880a -> /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/kindnet-917885/apiserver.key
	I0911 11:43:15.596875  332971 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/kindnet-917885/proxy-client.key
	I0911 11:43:15.596896  332971 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/kindnet-917885/proxy-client.crt with IP's: []
	I0911 11:43:15.991782  332971 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/kindnet-917885/proxy-client.crt ...
	I0911 11:43:15.991814  332971 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/kindnet-917885/proxy-client.crt: {Name:mk58cdbbc2657bead8f89c4f146e8867a51970ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 11:43:15.992022  332971 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/kindnet-917885/proxy-client.key ...
	I0911 11:43:15.992042  332971 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/kindnet-917885/proxy-client.key: {Name:mk378e269914afe4c94d09dbb1c953b4b89df556 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 11:43:15.992265  332971 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-136166/.minikube/certs/home/jenkins/minikube-integration/17223-136166/.minikube/certs/143417.pem (1338 bytes)
	W0911 11:43:15.992326  332971 certs.go:433] ignoring /home/jenkins/minikube-integration/17223-136166/.minikube/certs/home/jenkins/minikube-integration/17223-136166/.minikube/certs/143417_empty.pem, impossibly tiny 0 bytes
	I0911 11:43:15.992344  332971 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-136166/.minikube/certs/home/jenkins/minikube-integration/17223-136166/.minikube/certs/ca-key.pem (1679 bytes)
	I0911 11:43:15.992377  332971 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-136166/.minikube/certs/home/jenkins/minikube-integration/17223-136166/.minikube/certs/ca.pem (1082 bytes)
	I0911 11:43:15.992413  332971 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-136166/.minikube/certs/home/jenkins/minikube-integration/17223-136166/.minikube/certs/cert.pem (1123 bytes)
	I0911 11:43:15.992450  332971 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-136166/.minikube/certs/home/jenkins/minikube-integration/17223-136166/.minikube/certs/key.pem (1679 bytes)
	I0911 11:43:15.992505  332971 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-136166/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17223-136166/.minikube/files/etc/ssl/certs/1434172.pem (1708 bytes)
	I0911 11:43:15.993101  332971 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/kindnet-917885/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0911 11:43:16.016416  332971 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/kindnet-917885/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0911 11:43:16.043237  332971 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/kindnet-917885/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0911 11:43:16.066974  332971 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/kindnet-917885/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0911 11:43:16.088527  332971 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0911 11:43:16.110440  332971 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0911 11:43:16.131946  332971 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0911 11:43:16.154788  332971 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0911 11:43:16.176596  332971 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/certs/143417.pem --> /usr/share/ca-certificates/143417.pem (1338 bytes)
	I0911 11:43:16.197916  332971 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/files/etc/ssl/certs/1434172.pem --> /usr/share/ca-certificates/1434172.pem (1708 bytes)
	I0911 11:43:16.218954  332971 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0911 11:43:16.241388  332971 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0911 11:43:16.257192  332971 ssh_runner.go:195] Run: openssl version
	I0911 11:43:16.262303  332971 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/143417.pem && ln -fs /usr/share/ca-certificates/143417.pem /etc/ssl/certs/143417.pem"
	I0911 11:43:16.270779  332971 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/143417.pem
	I0911 11:43:16.273879  332971 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 11 11:15 /usr/share/ca-certificates/143417.pem
	I0911 11:43:16.273936  332971 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/143417.pem
	I0911 11:43:16.280017  332971 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/143417.pem /etc/ssl/certs/51391683.0"
	I0911 11:43:16.288370  332971 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1434172.pem && ln -fs /usr/share/ca-certificates/1434172.pem /etc/ssl/certs/1434172.pem"
	I0911 11:43:16.297025  332971 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1434172.pem
	I0911 11:43:16.300167  332971 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 11 11:15 /usr/share/ca-certificates/1434172.pem
	I0911 11:43:16.300225  332971 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1434172.pem
	I0911 11:43:16.306383  332971 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1434172.pem /etc/ssl/certs/3ec20f2e.0"
	I0911 11:43:16.315021  332971 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0911 11:43:16.323475  332971 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0911 11:43:16.326930  332971 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 11 11:09 /usr/share/ca-certificates/minikubeCA.pem
	I0911 11:43:16.326990  332971 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0911 11:43:16.333804  332971 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
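	The 51391683.0, 3ec20f2e.0 and b5213941.0 symlink targets above are OpenSSL subject hashes: library code looks CAs up in /etc/ssl/certs by <hash>.N rather than by file name. A minimal sketch of the same convention, assuming a hypothetical CA at /usr/share/ca-certificates/example-ca.pem:
	
	# print the subject hash OpenSSL uses for directory lookups (hypothetical cert path)
	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/example-ca.pem)
	# link the cert under <hash>.0; the ".0" suffix disambiguates hash collisions
	sudo ln -fs /usr/share/ca-certificates/example-ca.pem "/etc/ssl/certs/${HASH}.0"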
	I0911 11:43:16.343188  332971 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0911 11:43:16.346425  332971 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0911 11:43:16.346487  332971 kubeadm.go:404] StartCluster: {Name:kindnet-917885 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:kindnet-917885 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 11:43:16.346561  332971 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0911 11:43:16.346602  332971 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0911 11:43:16.379741  332971 cri.go:89] found id: ""
	I0911 11:43:16.379813  332971 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0911 11:43:16.388030  332971 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0911 11:43:16.396200  332971 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0911 11:43:16.396256  332971 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0911 11:43:16.403992  332971 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0911 11:43:16.404033  332971 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
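	Each comma-separated token passed to --ignore-preflight-errors above downgrades one named preflight check from a fatal error to a warning. A minimal sketch under the assumption of a hypothetical config at /tmp/kubeadm.yaml, skipping only the swap and CPU-count checks:
	
	# dry-run the preflight phase alone to see which checks would fail
	sudo kubeadm init phase preflight --config /tmp/kubeadm.yaml --ignore-preflight-errors=Swap,NumCPU
	# then initialize with the same selective skips
	sudo kubeadm init --config /tmp/kubeadm.yaml --ignore-preflight-errors=Swap,NumCPU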
	I0911 11:43:16.448187  332971 kubeadm.go:322] [init] Using Kubernetes version: v1.28.1
	I0911 11:43:16.448283  332971 kubeadm.go:322] [preflight] Running pre-flight checks
	I0911 11:43:16.488076  332971 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0911 11:43:16.488210  332971 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1041-gcp
	I0911 11:43:16.488280  332971 kubeadm.go:322] OS: Linux
	I0911 11:43:16.488351  332971 kubeadm.go:322] CGROUPS_CPU: enabled
	I0911 11:43:16.488416  332971 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0911 11:43:16.488509  332971 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0911 11:43:16.488593  332971 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0911 11:43:16.488662  332971 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0911 11:43:16.488730  332971 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0911 11:43:16.488788  332971 kubeadm.go:322] CGROUPS_PIDS: enabled
	I0911 11:43:16.488866  332971 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I0911 11:43:16.488946  332971 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I0911 11:43:16.566159  332971 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0911 11:43:16.566309  332971 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0911 11:43:16.566435  332971 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0911 11:43:16.830495  332971 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0911 11:43:14.058801  333486 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0911 11:43:14.068855  333486 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0911 11:43:14.080417  333486 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0911 11:43:14.092809  333486 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0911 11:43:14.303657  333486 ssh_runner.go:195] Run: sudo systemctl restart crio
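	The ip_forward write above takes effect immediately but is lost on reboot. A sketch of the persistent equivalent, assuming a systemd host that honors /etc/sysctl.d (file name illustrative; the net.bridge.* key also requires the br_netfilter module to be loaded):
	
	echo 'net.ipv4.ip_forward = 1' | sudo tee /etc/sysctl.d/99-kubernetes.conf
	echo 'net.bridge.bridge-nf-call-iptables = 1' | sudo tee -a /etc/sysctl.d/99-kubernetes.conf
	sudo sysctl --system   # reload every sysctl.d fragment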
	I0911 11:43:16.833610  332971 out.go:204]   - Generating certificates and keys ...
	I0911 11:43:16.833688  332971 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0911 11:43:16.833743  332971 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0911 11:43:16.907374  332971 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0911 11:43:16.974579  332971 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0911 11:43:17.273372  332971 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0911 11:43:17.438573  332971 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0911 11:43:18.062622  332971 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0911 11:43:18.063014  332971 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [kindnet-917885 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I0911 11:43:18.224739  332971 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0911 11:43:18.224925  332971 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [kindnet-917885 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I0911 11:43:18.487293  332971 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0911 11:43:18.587347  332971 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0911 11:43:18.758385  332971 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0911 11:43:18.758528  332971 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0911 11:43:18.912595  332971 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0911 11:43:19.188710  332971 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0911 11:43:19.374728  332971 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0911 11:43:19.538312  332971 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0911 11:43:19.538689  332971 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0911 11:43:19.541780  332971 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0911 11:43:15.404982  332029 out.go:204]   - Booting up control plane ...
	I0911 11:43:15.405172  332029 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0911 11:43:15.405286  332029 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0911 11:43:15.405347  332029 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0911 11:43:15.413262  332029 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0911 11:43:15.415167  332029 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0911 11:43:15.415247  332029 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0911 11:43:15.494868  332029 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0911 11:43:19.544126  332971 out.go:204]   - Booting up control plane ...
	I0911 11:43:19.544313  332971 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0911 11:43:19.544416  332971 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0911 11:43:19.544505  332971 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0911 11:43:19.552820  332971 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0911 11:43:19.553600  332971 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0911 11:43:19.553644  332971 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0911 11:43:19.634899  332971 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0911 11:43:20.496972  332029 kubeadm.go:322] [apiclient] All control plane components are healthy after 5.002214 seconds
	I0911 11:43:20.497155  332029 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0911 11:43:20.509465  332029 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0911 11:43:21.033834  332029 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0911 11:43:21.034134  332029 kubeadm.go:322] [mark-control-plane] Marking the node auto-917885 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0911 11:43:21.543489  332029 kubeadm.go:322] [bootstrap-token] Using token: hlx2xk.l6ot2giuv12spqqx
	I0911 11:43:21.545230  332029 out.go:204]   - Configuring RBAC rules ...
	I0911 11:43:21.545391  332029 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0911 11:43:21.549047  332029 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0911 11:43:21.557345  332029 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0911 11:43:21.561844  332029 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0911 11:43:21.564975  332029 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0911 11:43:21.567957  332029 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0911 11:43:21.580480  332029 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0911 11:43:21.847586  332029 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0911 11:43:21.965929  332029 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0911 11:43:21.967538  332029 kubeadm.go:322] 
	I0911 11:43:21.967617  332029 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0911 11:43:21.967624  332029 kubeadm.go:322] 
	I0911 11:43:21.967717  332029 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0911 11:43:21.967724  332029 kubeadm.go:322] 
	I0911 11:43:21.967753  332029 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0911 11:43:21.967818  332029 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0911 11:43:21.967880  332029 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0911 11:43:21.967887  332029 kubeadm.go:322] 
	I0911 11:43:21.967951  332029 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0911 11:43:21.967958  332029 kubeadm.go:322] 
	I0911 11:43:21.968014  332029 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0911 11:43:21.968021  332029 kubeadm.go:322] 
	I0911 11:43:21.968087  332029 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0911 11:43:21.968183  332029 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0911 11:43:21.968271  332029 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0911 11:43:21.968278  332029 kubeadm.go:322] 
	I0911 11:43:21.968373  332029 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0911 11:43:21.968459  332029 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0911 11:43:21.968464  332029 kubeadm.go:322] 
	I0911 11:43:21.968561  332029 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token hlx2xk.l6ot2giuv12spqqx \
	I0911 11:43:21.968680  332029 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:29dd5c485ccb33701674fbb0d9adfdc876e5b65e82ced9970c93ac8717b7a347 \
	I0911 11:43:21.968705  332029 kubeadm.go:322] 	--control-plane 
	I0911 11:43:21.968711  332029 kubeadm.go:322] 
	I0911 11:43:21.968811  332029 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0911 11:43:21.968818  332029 kubeadm.go:322] 
	I0911 11:43:21.968917  332029 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token hlx2xk.l6ot2giuv12spqqx \
	I0911 11:43:21.969042  332029 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:29dd5c485ccb33701674fbb0d9adfdc876e5b65e82ced9970c93ac8717b7a347 
	I0911 11:43:21.971960  332029 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1041-gcp\n", err: exit status 1
	I0911 11:43:21.972127  332029 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0911 11:43:21.972151  332029 cni.go:84] Creating CNI manager for ""
	I0911 11:43:21.972159  332029 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0911 11:43:21.974181  332029 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0911 11:43:22.396779  333486 ssh_runner.go:235] Completed: sudo systemctl restart crio: (8.093069315s)
	I0911 11:43:22.396818  333486 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0911 11:43:22.396886  333486 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0911 11:43:22.400563  333486 start.go:534] Will wait 60s for crictl version
	I0911 11:43:22.400646  333486 ssh_runner.go:195] Run: which crictl
	I0911 11:43:22.404691  333486 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0911 11:43:22.457728  333486 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0911 11:43:22.457810  333486 ssh_runner.go:195] Run: crio --version
	I0911 11:43:22.503118  333486 ssh_runner.go:195] Run: crio --version
	I0911 11:43:22.546371  333486 out.go:177] * Preparing Kubernetes v1.28.1 on CRI-O 1.24.6 ...
	I0911 11:43:21.573218  306855 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.090302165s)
	W0911 11:43:21.573260  306855 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I0911 11:43:21.573271  306855 logs.go:123] Gathering logs for kube-apiserver [b7f54e8d3985de06b7fea85e2d43d2c57b87e1c8541ffc22fbb1a69c05525596] ...
	I0911 11:43:21.573284  306855 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b7f54e8d3985de06b7fea85e2d43d2c57b87e1c8541ffc22fbb1a69c05525596"
	I0911 11:43:21.622868  306855 logs.go:123] Gathering logs for CRI-O ...
	I0911 11:43:21.622917  306855 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0911 11:43:21.651460  306855 logs.go:123] Gathering logs for container status ...
	I0911 11:43:21.651498  306855 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0911 11:43:21.701273  306855 logs.go:123] Gathering logs for kubelet ...
	I0911 11:43:21.701309  306855 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0911 11:43:21.784515  306855 logs.go:123] Gathering logs for dmesg ...
	I0911 11:43:21.784569  306855 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0911 11:43:22.548178  333486 cli_runner.go:164] Run: docker network inspect pause-844693 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0911 11:43:22.567958  333486 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0911 11:43:22.572012  333486 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0911 11:43:22.572084  333486 ssh_runner.go:195] Run: sudo crictl images --output json
	I0911 11:43:22.620455  333486 crio.go:496] all images are preloaded for cri-o runtime.
	I0911 11:43:22.620481  333486 crio.go:415] Images already preloaded, skipping extraction
	I0911 11:43:22.620536  333486 ssh_runner.go:195] Run: sudo crictl images --output json
	I0911 11:43:22.660449  333486 crio.go:496] all images are preloaded for cri-o runtime.
	I0911 11:43:22.660474  333486 cache_images.go:84] Images are preloaded, skipping loading
	I0911 11:43:22.660545  333486 ssh_runner.go:195] Run: crio config
	I0911 11:43:22.730066  333486 cni.go:84] Creating CNI manager for ""
	I0911 11:43:22.730098  333486 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0911 11:43:22.730121  333486 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0911 11:43:22.730144  333486 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-844693 NodeName:pause-844693 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0911 11:43:22.730297  333486 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-844693"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
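	A multi-document config like the one above can be sanity-checked before it is handed to init; recent kubeadm releases ship a validate subcommand (availability is an assumption about the installed version):
	
	# strict-validate the InitConfiguration/ClusterConfiguration/Kubelet/KubeProxy documents
	kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml
	# print the fully-defaulted equivalent for comparison
	kubeadm config print init-defaults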
	
	I0911 11:43:22.730362  333486 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=pause-844693 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:pause-844693 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
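	The empty ExecStart= line is the standard systemd drop-in idiom: it clears the ExecStart inherited from the base kubelet.service before the override defines the new command. A short sketch for inspecting and applying such a drop-in once it lands in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (scp'd below):
	
	systemctl cat kubelet                                            # base unit with all drop-ins merged
	sudo systemctl daemon-reload && sudo systemctl restart kubelet   # pick up the override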
	I0911 11:43:22.730410  333486 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0911 11:43:22.739348  333486 binaries.go:44] Found k8s binaries, skipping transfer
	I0911 11:43:22.739429  333486 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0911 11:43:22.747871  333486 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (422 bytes)
	I0911 11:43:22.764503  333486 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0911 11:43:22.782985  333486 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2093 bytes)
	I0911 11:43:22.805153  333486 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0911 11:43:22.808703  333486 certs.go:56] Setting up /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/pause-844693 for IP: 192.168.76.2
	I0911 11:43:22.808734  333486 certs.go:190] acquiring lock for shared ca certs: {Name:mk582952512be9164e5fce9dc802f18bafa97346 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 11:43:22.808896  333486 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17223-136166/.minikube/ca.key
	I0911 11:43:22.808951  333486 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17223-136166/.minikube/proxy-client-ca.key
	I0911 11:43:22.809052  333486 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/pause-844693/client.key
	I0911 11:43:22.809142  333486 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/pause-844693/apiserver.key.31bdca25
	I0911 11:43:22.809227  333486 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/pause-844693/proxy-client.key
	I0911 11:43:22.809368  333486 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-136166/.minikube/certs/home/jenkins/minikube-integration/17223-136166/.minikube/certs/143417.pem (1338 bytes)
	W0911 11:43:22.809404  333486 certs.go:433] ignoring /home/jenkins/minikube-integration/17223-136166/.minikube/certs/home/jenkins/minikube-integration/17223-136166/.minikube/certs/143417_empty.pem, impossibly tiny 0 bytes
	I0911 11:43:22.809431  333486 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-136166/.minikube/certs/home/jenkins/minikube-integration/17223-136166/.minikube/certs/ca-key.pem (1679 bytes)
	I0911 11:43:22.809466  333486 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-136166/.minikube/certs/home/jenkins/minikube-integration/17223-136166/.minikube/certs/ca.pem (1082 bytes)
	I0911 11:43:22.809502  333486 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-136166/.minikube/certs/home/jenkins/minikube-integration/17223-136166/.minikube/certs/cert.pem (1123 bytes)
	I0911 11:43:22.809536  333486 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-136166/.minikube/certs/home/jenkins/minikube-integration/17223-136166/.minikube/certs/key.pem (1679 bytes)
	I0911 11:43:22.809715  333486 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-136166/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17223-136166/.minikube/files/etc/ssl/certs/1434172.pem (1708 bytes)
	I0911 11:43:22.810561  333486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/pause-844693/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0911 11:43:22.842061  333486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/pause-844693/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0911 11:43:22.870043  333486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/pause-844693/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0911 11:43:22.904154  333486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/pause-844693/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0911 11:43:22.934359  333486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0911 11:43:22.959538  333486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0911 11:43:22.991252  333486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0911 11:43:23.026499  333486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0911 11:43:23.051888  333486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0911 11:43:23.095945  333486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/certs/143417.pem --> /usr/share/ca-certificates/143417.pem (1338 bytes)
	I0911 11:43:23.121500  333486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/files/etc/ssl/certs/1434172.pem --> /usr/share/ca-certificates/1434172.pem (1708 bytes)
	I0911 11:43:23.145698  333486 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0911 11:43:23.171429  333486 ssh_runner.go:195] Run: openssl version
	I0911 11:43:23.178842  333486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0911 11:43:23.194020  333486 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0911 11:43:23.197914  333486 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 11 11:09 /usr/share/ca-certificates/minikubeCA.pem
	I0911 11:43:23.197971  333486 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0911 11:43:23.204630  333486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0911 11:43:23.214542  333486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/143417.pem && ln -fs /usr/share/ca-certificates/143417.pem /etc/ssl/certs/143417.pem"
	I0911 11:43:23.223895  333486 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/143417.pem
	I0911 11:43:23.227164  333486 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 11 11:15 /usr/share/ca-certificates/143417.pem
	I0911 11:43:23.227244  333486 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/143417.pem
	I0911 11:43:23.234043  333486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/143417.pem /etc/ssl/certs/51391683.0"
	I0911 11:43:23.243089  333486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1434172.pem && ln -fs /usr/share/ca-certificates/1434172.pem /etc/ssl/certs/1434172.pem"
	I0911 11:43:23.254388  333486 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1434172.pem
	I0911 11:43:23.258040  333486 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 11 11:15 /usr/share/ca-certificates/1434172.pem
	I0911 11:43:23.258167  333486 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1434172.pem
	I0911 11:43:23.268875  333486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1434172.pem /etc/ssl/certs/3ec20f2e.0"
	I0911 11:43:23.279175  333486 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0911 11:43:23.283065  333486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0911 11:43:23.290568  333486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0911 11:43:23.298395  333486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0911 11:43:23.306718  333486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0911 11:43:23.314616  333486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0911 11:43:23.322227  333486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
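	Each -checkend 86400 run above asks whether the certificate will still be valid 86400 seconds (24 hours) from now; exit status 0 means it will not expire inside that window. A minimal sketch of the same check with the expiry date printed alongside:
	
	CRT=/var/lib/minikube/certs/apiserver-etcd-client.crt
	openssl x509 -noout -enddate -in "$CRT"    # prints e.g. notAfter=... (output illustrative)
	if ! openssl x509 -noout -in "$CRT" -checkend 86400; then
	  echo "certificate expires within 24h; regeneration needed" >&2
	fi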
	I0911 11:43:23.329995  333486 kubeadm.go:404] StartCluster: {Name:pause-844693 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:pause-844693 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 11:43:23.330192  333486 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0911 11:43:23.330276  333486 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0911 11:43:23.366608  333486 cri.go:89] found id: "cdf9aa78109f17bfdb382122a5728c8159ea39b39801dbd64eb80d2483cc2cab"
	I0911 11:43:23.366640  333486 cri.go:89] found id: "fdb91a124a6a570b2436748b4ba6a86b898e9d6a13a3930db525639b7ccf74fd"
	I0911 11:43:23.366647  333486 cri.go:89] found id: "aa9227286c98956417f65ee195d8cc9c096f779ac33dd93e51ec1f63e9c64727"
	I0911 11:43:23.366653  333486 cri.go:89] found id: "76d35a166fd5d8b00d62567d0e510be9f811d2a2733ee48dbe533273800db765"
	I0911 11:43:23.366658  333486 cri.go:89] found id: "9a62d90cca609fcd0f7c1dfecfc6253779227bfcd3f89c5bc37f5abfab2e993c"
	I0911 11:43:23.366665  333486 cri.go:89] found id: "0885e2fcf44f13ce18fb0b2e5369f657935199c74ef3bb6c3f7d944dd92c903f"
	I0911 11:43:23.366670  333486 cri.go:89] found id: "a441792974757ee7a4534b129dde0ff64172775499e60591ed65e0082d1c2680"
	I0911 11:43:23.366675  333486 cri.go:89] found id: "43b750852cf7cf1ba60fa8e429fff93606a5b2db68b62a2e96080df44d120808"
	I0911 11:43:23.366681  333486 cri.go:89] found id: "98d435edeb4433e8035865016ccf3816a70447275adc8b069cb74e222026044b"
	I0911 11:43:23.366708  333486 cri.go:89] found id: "385f7e6d1f77e5b71772a46ca4a4f24f678c2c4c31f7b142a7d3c41c599e0115"
	I0911 11:43:23.366721  333486 cri.go:89] found id: "abcad4a868fa9e3492e9b8da9cdb9c09be851280ca45cb057ad2790cfbe873f4"
	I0911 11:43:23.366727  333486 cri.go:89] found id: "b3946a720abf45cb0400edf2961b8177cee7ded0d89a67215949fba8eed0285f"
	I0911 11:43:23.366738  333486 cri.go:89] found id: "1de4fb6c7d34a7290d7a4ddb1c1dcc8c2f6b06fbd043dab5a2b4c9385bee8829"
	I0911 11:43:23.366744  333486 cri.go:89] found id: "a131faaa13e53100059367ccbeb807c8ca911aaee113f897c694d56b0847b530"
	I0911 11:43:23.366759  333486 cri.go:89] found id: "dbe08d5d45acc84a41457fc5fd2e252933fc14c88b84fb18bb6d48ae40109115"
	I0911 11:43:23.366764  333486 cri.go:89] found id: "dbd37dfbd8007b159842812dbf088fe24d51c704801c40d390145bd3ef1ee2b7"
	I0911 11:43:23.366773  333486 cri.go:89] found id: ""
	I0911 11:43:23.366819  333486 ssh_runner.go:195] Run: sudo runc list -f json
	
	* 
	* ==> CRI-O <==
	* Sep 11 11:43:28 pause-844693 crio[3183]: time="2023-09-11 11:43:28.259997204Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Sep 11 11:43:28 pause-844693 crio[3183]: time="2023-09-11 11:43:28.260040736Z" level=info msg="Updated default CNI network name to kindnet"
	Sep 11 11:43:41 pause-844693 crio[3183]: time="2023-09-11 11:43:41.308739248Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.10.1" id=2ed7ff41-d4a0-4aaa-894e-c5fa73c2f200 name=/runtime.v1.ImageService/ImageStatus
	Sep 11 11:43:41 pause-844693 crio[3183]: time="2023-09-11 11:43:41.308982789Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,RepoTags:[registry.k8s.io/coredns/coredns:v1.10.1],RepoDigests:[registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378],Size_:53621675,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=2ed7ff41-d4a0-4aaa-894e-c5fa73c2f200 name=/runtime.v1.ImageService/ImageStatus
	Sep 11 11:43:41 pause-844693 crio[3183]: time="2023-09-11 11:43:41.309571640Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.10.1" id=4a75d97a-8eb4-4831-ae6f-69135afac812 name=/runtime.v1.ImageService/ImageStatus
	Sep 11 11:43:41 pause-844693 crio[3183]: time="2023-09-11 11:43:41.309763998Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,RepoTags:[registry.k8s.io/coredns/coredns:v1.10.1],RepoDigests:[registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378],Size_:53621675,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=4a75d97a-8eb4-4831-ae6f-69135afac812 name=/runtime.v1.ImageService/ImageStatus
	Sep 11 11:43:41 pause-844693 crio[3183]: time="2023-09-11 11:43:41.310627948Z" level=info msg="Creating container: kube-system/coredns-5dd5756b68-zvh8m/coredns" id=0d45080a-b153-40df-a6f6-83b9329c49ac name=/runtime.v1.RuntimeService/CreateContainer
	Sep 11 11:43:41 pause-844693 crio[3183]: time="2023-09-11 11:43:41.310712569Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 11 11:43:41 pause-844693 crio[3183]: time="2023-09-11 11:43:41.322970026Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/e79c00afa1225c4c51b21280de83a6015eef9423c76e55397281bd1e634fce2c/merged/etc/passwd: no such file or directory"
	Sep 11 11:43:41 pause-844693 crio[3183]: time="2023-09-11 11:43:41.323020868Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/e79c00afa1225c4c51b21280de83a6015eef9423c76e55397281bd1e634fce2c/merged/etc/group: no such file or directory"
	Sep 11 11:43:41 pause-844693 crio[3183]: time="2023-09-11 11:43:41.378603079Z" level=info msg="Created container b2fbe23930c38fb42af9a143f14a02de2db053df7685bb7e2940a1a1be96c9c3: kube-system/coredns-5dd5756b68-zvh8m/coredns" id=0d45080a-b153-40df-a6f6-83b9329c49ac name=/runtime.v1.RuntimeService/CreateContainer
	Sep 11 11:43:41 pause-844693 crio[3183]: time="2023-09-11 11:43:41.379196632Z" level=info msg="Starting container: b2fbe23930c38fb42af9a143f14a02de2db053df7685bb7e2940a1a1be96c9c3" id=d09b7ac3-f4f2-4657-9da3-a9f43d36d26e name=/runtime.v1.RuntimeService/StartContainer
	Sep 11 11:43:41 pause-844693 crio[3183]: time="2023-09-11 11:43:41.388045015Z" level=info msg="Started container" PID=4114 containerID=b2fbe23930c38fb42af9a143f14a02de2db053df7685bb7e2940a1a1be96c9c3 description=kube-system/coredns-5dd5756b68-zvh8m/coredns id=d09b7ac3-f4f2-4657-9da3-a9f43d36d26e name=/runtime.v1.RuntimeService/StartContainer sandboxID=f1bbe20f37ff2bb977c6512344f792aa53f8cc5cb222f22515286e8e2bbdd5ed
	Sep 11 11:43:44 pause-844693 crio[3183]: time="2023-09-11 11:43:44.308660150Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.10.1" id=1625a166-0c0e-4415-8ad1-33bb87c75a66 name=/runtime.v1.ImageService/ImageStatus
	Sep 11 11:43:44 pause-844693 crio[3183]: time="2023-09-11 11:43:44.308862663Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,RepoTags:[registry.k8s.io/coredns/coredns:v1.10.1],RepoDigests:[registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378],Size_:53621675,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=1625a166-0c0e-4415-8ad1-33bb87c75a66 name=/runtime.v1.ImageService/ImageStatus
	Sep 11 11:43:44 pause-844693 crio[3183]: time="2023-09-11 11:43:44.309651949Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.10.1" id=589b0da9-909a-4206-9c42-7f0d83bfed7f name=/runtime.v1.ImageService/ImageStatus
	Sep 11 11:43:44 pause-844693 crio[3183]: time="2023-09-11 11:43:44.309847666Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,RepoTags:[registry.k8s.io/coredns/coredns:v1.10.1],RepoDigests:[registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378],Size_:53621675,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=589b0da9-909a-4206-9c42-7f0d83bfed7f name=/runtime.v1.ImageService/ImageStatus
	Sep 11 11:43:44 pause-844693 crio[3183]: time="2023-09-11 11:43:44.310859010Z" level=info msg="Creating container: kube-system/coredns-5dd5756b68-2gn29/coredns" id=e46f2c02-73b9-4f0a-b8e0-4162c28e1512 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 11 11:43:44 pause-844693 crio[3183]: time="2023-09-11 11:43:44.310949034Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 11 11:43:44 pause-844693 crio[3183]: time="2023-09-11 11:43:44.323021313Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/388c9dd6b22665f6960c5e0c86c5ca48667aafd5e93a27bfb89615fc5fcc150a/merged/etc/passwd: no such file or directory"
	Sep 11 11:43:44 pause-844693 crio[3183]: time="2023-09-11 11:43:44.323073076Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/388c9dd6b22665f6960c5e0c86c5ca48667aafd5e93a27bfb89615fc5fcc150a/merged/etc/group: no such file or directory"
	Sep 11 11:43:44 pause-844693 crio[3183]: time="2023-09-11 11:43:44.381326955Z" level=info msg="Created container 504dd4136806cf8fc1073d17eaf742bf69b22f4f925910e04efdaa11460d007c: kube-system/coredns-5dd5756b68-2gn29/coredns" id=e46f2c02-73b9-4f0a-b8e0-4162c28e1512 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 11 11:43:44 pause-844693 crio[3183]: time="2023-09-11 11:43:44.381961962Z" level=info msg="Starting container: 504dd4136806cf8fc1073d17eaf742bf69b22f4f925910e04efdaa11460d007c" id=22c48042-f4b7-4078-83c4-4b1b6ad3a966 name=/runtime.v1.RuntimeService/StartContainer
	Sep 11 11:43:44 pause-844693 crio[3183]: time="2023-09-11 11:43:44.390984441Z" level=info msg="Started container" PID=4167 containerID=504dd4136806cf8fc1073d17eaf742bf69b22f4f925910e04efdaa11460d007c description=kube-system/coredns-5dd5756b68-2gn29/coredns id=22c48042-f4b7-4078-83c4-4b1b6ad3a966 name=/runtime.v1.RuntimeService/StartContainer sandboxID=570ed816a3ca60dab3c956bd61a340fd015bc09e44a20cea1ed4ecc4de668ba8
	Sep 11 11:43:44 pause-844693 crio[3183]: time="2023-09-11 11:43:44.621713749Z" level=info msg="Stopping container: 504dd4136806cf8fc1073d17eaf742bf69b22f4f925910e04efdaa11460d007c (timeout: 30s)" id=803dd1a7-888c-45a7-a0e7-3c8d9e28bbcb name=/runtime.v1.RuntimeService/StopContainer
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	504dd4136806c       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   3 seconds ago       Running             coredns                   2                   570ed816a3ca6       coredns-5dd5756b68-2gn29
	b2fbe23930c38       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   6 seconds ago       Running             coredns                   2                   f1bbe20f37ff2       coredns-5dd5756b68-zvh8m
	ac4f8827ccd76       5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77   22 seconds ago      Running             kube-apiserver            2                   f886ff95e63b0       kube-apiserver-pause-844693
	835bc7b9b230e       821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac   22 seconds ago      Running             kube-controller-manager   2                   caf398077a4f1       kube-controller-manager-pause-844693
	8a7deea25aedf       b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a   23 seconds ago      Running             kube-scheduler            2                   3143e4acee751       kube-scheduler-pause-844693
	5068566eb8b8e       b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da   23 seconds ago      Running             kindnet-cni               2                   4214cedbf1d53       kindnet-7tct8
	bb8630f2c0c73       6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5   23 seconds ago      Running             kube-proxy                2                   6b8bcfa7f2e07       kube-proxy-gfzb6
	d7048e0b4e834       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   23 seconds ago      Running             etcd                      2                   8801f7e114cbe       etcd-pause-844693
	cdf9aa78109f1       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   35 seconds ago      Exited              etcd                      1                   8801f7e114cbe       etcd-pause-844693
	fdb91a124a6a5       5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77   35 seconds ago      Exited              kube-apiserver            1                   f886ff95e63b0       kube-apiserver-pause-844693
	aa9227286c989       821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac   35 seconds ago      Exited              kube-controller-manager   1                   caf398077a4f1       kube-controller-manager-pause-844693
	76d35a166fd5d       b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a   35 seconds ago      Exited              kube-scheduler            1                   3143e4acee751       kube-scheduler-pause-844693
	9a62d90cca609       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   36 seconds ago      Exited              coredns                   1                   f1bbe20f37ff2       coredns-5dd5756b68-zvh8m
	0885e2fcf44f1       6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5   36 seconds ago      Exited              kube-proxy                1                   6b8bcfa7f2e07       kube-proxy-gfzb6
	a441792974757       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   36 seconds ago      Exited              coredns                   1                   570ed816a3ca6       coredns-5dd5756b68-2gn29
	43b750852cf7c       b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da   36 seconds ago      Exited              kindnet-cni               1                   4214cedbf1d53       kindnet-7tct8
	
	* 
	* ==> coredns [504dd4136806cf8fc1073d17eaf742bf69b22f4f925910e04efdaa11460d007c] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:52149 - 64496 "HINFO IN 5024998056764530279.7459128487944673813. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.019545154s
	
	* 
	* ==> coredns [9a62d90cca609fcd0f7c1dfecfc6253779227bfcd3f89c5bc37f5abfab2e993c] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:39801 - 34182 "HINFO IN 2913630947870021945.1069754061682992805. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.021065668s
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
	
	* 
	* ==> coredns [a441792974757ee7a4534b129dde0ff64172775499e60591ed65e0082d1c2680] <==
	* [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:53885 - 27010 "HINFO IN 5498766548903002151.6473315833692776976. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.031123367s
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
	
	* 
	* ==> coredns [b2fbe23930c38fb42af9a143f14a02de2db053df7685bb7e2940a1a1be96c9c3] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:42195 - 23296 "HINFO IN 2791760603007898383.7344023142748220875. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.020483865s
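	
	The "waiting for Kubernetes API" lines in the two exited instances above show CoreDNS restarted before the apiserver was reachable again. The failed instances' output is also retrievable by pod name (names taken from the container listing above); a sketch, assuming the kubeconfig context matches the profile name:
	  kubectl --context pause-844693 -n kube-system logs coredns-5dd5756b68-zvh8m --previous
	  kubectl --context pause-844693 -n kube-system logs coredns-5dd5756b68-2gn29 --previous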
	
	* 
	* ==> describe nodes <==
	* Name:               pause-844693
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-844693
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=58460de6978298fe1c37b30354468f3a287d03e9
	                    minikube.k8s.io/name=pause-844693
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_11T11_42_46_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 11 Sep 2023 11:42:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-844693
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 11 Sep 2023 11:43:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 11 Sep 2023 11:42:59 +0000   Mon, 11 Sep 2023 11:42:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 11 Sep 2023 11:42:59 +0000   Mon, 11 Sep 2023 11:42:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 11 Sep 2023 11:42:59 +0000   Mon, 11 Sep 2023 11:42:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 11 Sep 2023 11:42:59 +0000   Mon, 11 Sep 2023 11:42:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-844693
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859436Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859436Ki
	  pods:               110
	System Info:
	  Machine ID:                 fd36efc168a545e19e5a580c2e506316
	  System UUID:                8ce237c8-20ba-4507-af0e-40571ac4a272
	  Boot ID:                    0e6f3313-afe9-4b8d-8d49-46470123e935
	  Kernel Version:             5.15.0-1041-gcp
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.1
	  Kube-Proxy Version:         v1.28.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-2gn29                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     50s
	  kube-system                 coredns-5dd5756b68-zvh8m                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     50s
	  kube-system                 etcd-pause-844693                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         62s
	  kube-system                 kindnet-7tct8                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      49s
	  kube-system                 kube-apiserver-pause-844693             250m (3%)     0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 kube-controller-manager-pause-844693    200m (2%)     0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 kube-proxy-gfzb6                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	  kube-system                 kube-scheduler-pause-844693             100m (1%)     0 (0%)      0 (0%)           0 (0%)         62s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             290Mi (0%)  390Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 48s                kube-proxy       
	  Normal  Starting                 19s                kube-proxy       
	  Normal  Starting                 69s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  68s (x8 over 69s)  kubelet          Node pause-844693 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    68s (x8 over 69s)  kubelet          Node pause-844693 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     68s (x8 over 69s)  kubelet          Node pause-844693 status is now: NodeHasSufficientPID
	  Normal  Starting                 62s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  62s                kubelet          Node pause-844693 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    62s                kubelet          Node pause-844693 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     62s                kubelet          Node pause-844693 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           50s                node-controller  Node pause-844693 event: Registered Node pause-844693 in Controller
	  Normal  NodeReady                48s                kubelet          Node pause-844693 status is now: NodeReady
	  Normal  RegisteredNode           7s                 node-controller  Node pause-844693 event: Registered Node pause-844693 in Controller
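	
	The same node description can be reproduced against the profile's kubeconfig context; a sketch:
	  kubectl --context pause-844693 describe node pause-844693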
	
	* 
	* ==> dmesg <==
	* [  +4.255658] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-40f62e59100c
	[  +0.000005] ll header: 00000000: 02 42 ee 21 f8 bd 02 42 c0 a8 3a 02 08 00
	[  +8.191293] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-40f62e59100c
	[  +0.000005] ll header: 00000000: 02 42 ee 21 f8 bd 02 42 c0 a8 3a 02 08 00
	[Sep11 11:32] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-40f62e59100c
	[  +0.000008] ll header: 00000000: 02 42 ee 21 f8 bd 02 42 c0 a8 3a 02 08 00
	[  +1.001369] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-40f62e59100c
	[  +0.000006] ll header: 00000000: 02 42 ee 21 f8 bd 02 42 c0 a8 3a 02 08 00
	[  +2.015800] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-40f62e59100c
	[  +0.000021] ll header: 00000000: 02 42 ee 21 f8 bd 02 42 c0 a8 3a 02 08 00
	[Sep11 11:33] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-40f62e59100c
	[  +0.000025] ll header: 00000000: 02 42 ee 21 f8 bd 02 42 c0 a8 3a 02 08 00
	[  +8.195301] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-40f62e59100c
	[  +0.000005] ll header: 00000000: 02 42 ee 21 f8 bd 02 42 c0 a8 3a 02 08 00
	[Sep11 11:36] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-2368f9e917b6
	[  +0.000009] ll header: 00000000: 02 42 26 1c bc 41 02 42 c0 a8 43 02 08 00
	[  +1.011311] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-2368f9e917b6
	[  +0.000025] ll header: 00000000: 02 42 26 1c bc 41 02 42 c0 a8 43 02 08 00
	[  +2.019772] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-2368f9e917b6
	[  +0.000006] ll header: 00000000: 02 42 26 1c bc 41 02 42 c0 a8 43 02 08 00
	[  +4.187654] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-2368f9e917b6
	[  +0.000006] ll header: 00000000: 02 42 26 1c bc 41 02 42 c0 a8 43 02 08 00
	[  +8.191342] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-2368f9e917b6
	[  +0.000006] ll header: 00000000: 02 42 26 1c bc 41 02 42 c0 a8 43 02 08 00
	[Sep11 11:40] process 'docker/tmp/qemu-check437207382/check' started with executable stack
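	
	The martian-source messages come from the host kernel's view of traffic on the Docker bridges, not from the cluster itself; the node's kernel ring buffer can be re-read directly; a sketch, assuming util-linux dmesg inside the kicbase image:
	  out/minikube-linux-amd64 -p pause-844693 ssh "sudo dmesg --ctime | tail -n 50"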
	
	* 
	* ==> etcd [cdf9aa78109f17bfdb382122a5728c8159ea39b39801dbd64eb80d2483cc2cab] <==
	* {"level":"info","ts":"2023-09-11T11:43:13.890145Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2023-09-11T11:43:13.890195Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2023-09-11T11:43:13.890231Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2023-09-11T11:43:13.890249Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2023-09-11T11:43:13.890257Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2023-09-11T11:43:13.890269Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2023-09-11T11:43:13.890291Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2023-09-11T11:43:13.891743Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-11T11:43:13.891733Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:pause-844693 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2023-09-11T11:43:13.891754Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-11T11:43:13.893183Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2023-09-11T11:43:13.892574Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-09-11T11:43:13.893276Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-09-11T11:43:13.893764Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-09-11T11:43:14.319019Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2023-09-11T11:43:14.319178Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"pause-844693","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	{"level":"warn","ts":"2023-09-11T11:43:14.319277Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-09-11T11:43:14.319354Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-09-11T11:43:14.319504Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"127.0.0.1:38658","server-name":"","error":"read tcp 127.0.0.1:2379->127.0.0.1:38658: use of closed network connection"}
	{"level":"warn","ts":"2023-09-11T11:43:14.367237Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-09-11T11:43:14.3673Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"info","ts":"2023-09-11T11:43:14.367352Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ea7e25599daad906","current-leader-member-id":"ea7e25599daad906"}
	{"level":"info","ts":"2023-09-11T11:43:14.370155Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2023-09-11T11:43:14.370261Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2023-09-11T11:43:14.370299Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"pause-844693","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	
	* 
	* ==> etcd [d7048e0b4e8348142a7d3d7b1571b7df79b4b35a53c9f8793e6235036b8c14e7] <==
	* {"level":"info","ts":"2023-09-11T11:43:24.88992Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-09-11T11:43:24.889958Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-09-11T11:43:24.890014Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2023-09-11T11:43:24.89021Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2023-09-11T11:43:24.890378Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-11T11:43:24.890424Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-11T11:43:24.89326Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-09-11T11:43:24.893344Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2023-09-11T11:43:24.893362Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2023-09-11T11:43:24.893636Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-09-11T11:43:24.893728Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-09-11T11:43:26.768191Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 3"}
	{"level":"info","ts":"2023-09-11T11:43:26.768265Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 3"}
	{"level":"info","ts":"2023-09-11T11:43:26.768281Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2023-09-11T11:43:26.768293Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 4"}
	{"level":"info","ts":"2023-09-11T11:43:26.768298Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 4"}
	{"level":"info","ts":"2023-09-11T11:43:26.768306Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 4"}
	{"level":"info","ts":"2023-09-11T11:43:26.768313Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 4"}
	{"level":"info","ts":"2023-09-11T11:43:26.769756Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-11T11:43:26.769759Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:pause-844693 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2023-09-11T11:43:26.769783Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-11T11:43:26.77006Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-09-11T11:43:26.770083Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-09-11T11:43:26.771024Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2023-09-11T11:43:26.771133Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	* 
	* ==> kernel <==
	*  11:43:47 up  1:26,  0 users,  load average: 6.58, 4.16, 2.49
	Linux pause-844693 5.15.0-1041-gcp #49~20.04.1-Ubuntu SMP Tue Aug 29 06:49:34 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [43b750852cf7cf1ba60fa8e429fff93606a5b2db68b62a2e96080df44d120808] <==
	* I0911 11:43:11.667937       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0911 11:43:11.668130       1 main.go:107] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I0911 11:43:11.668347       1 main.go:116] setting mtu 1500 for CNI 
	I0911 11:43:11.668393       1 main.go:146] kindnetd IP family: "ipv4"
	I0911 11:43:11.668435       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0911 11:43:12.059009       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	I0911 11:43:12.059259       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	
	* 
	* ==> kindnet [5068566eb8b8e9b7882e28cde4266b0c0493bf561be465a46bd9e8934d040a26] <==
	* I0911 11:43:25.063184       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0911 11:43:25.063247       1 main.go:107] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I0911 11:43:25.063436       1 main.go:116] setting mtu 1500 for CNI 
	I0911 11:43:25.063455       1 main.go:146] kindnetd IP family: "ipv4"
	I0911 11:43:25.063483       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0911 11:43:28.165637       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0911 11:43:28.167422       1 main.go:227] handling current node
	I0911 11:43:38.181733       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0911 11:43:38.181758       1 main.go:227] handling current node
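	
	The exited kindnet instance above only logged connection-refused retries, while its replacement reached the apiserver. The failed instance's output is also available as the pod's previous-container logs; a sketch:
	  kubectl --context pause-844693 -n kube-system logs kindnet-7tct8 --previous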
	
	* 
	* ==> kube-apiserver [ac4f8827ccd7654f7332de2fa03fe664c40df7b333f8d5c3f10073848d4af152] <==
	* I0911 11:43:27.935381       1 controller.go:85] Starting OpenAPI V3 controller
	I0911 11:43:27.936156       1 apf_controller.go:372] Starting API Priority and Fairness config controller
	I0911 11:43:27.937115       1 aggregator.go:164] waiting for initial CRD sync...
	I0911 11:43:27.937129       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0911 11:43:27.937134       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I0911 11:43:27.937179       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0911 11:43:27.937273       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0911 11:43:28.062521       1 shared_informer.go:318] Caches are synced for configmaps
	I0911 11:43:28.063426       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0911 11:43:28.068586       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0911 11:43:28.072433       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0911 11:43:28.074333       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0911 11:43:28.074416       1 aggregator.go:166] initial CRD sync complete...
	I0911 11:43:28.074450       1 autoregister_controller.go:141] Starting autoregister controller
	I0911 11:43:28.074499       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0911 11:43:28.074531       1 cache.go:39] Caches are synced for autoregister controller
	I0911 11:43:28.158715       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0911 11:43:28.158726       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0911 11:43:28.161370       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0911 11:43:28.158827       1 shared_informer.go:318] Caches are synced for node_authorizer
	E0911 11:43:28.164632       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0911 11:43:28.940065       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0911 11:43:40.896398       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0911 11:43:40.951410       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0911 11:43:40.995339       1 controller.go:624] quota admission added evaluator for: endpoints
	
	* 
	* ==> kube-apiserver [fdb91a124a6a570b2436748b4ba6a86b898e9d6a13a3930db525639b7ccf74fd] <==
	*   "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0911 11:43:14.325104       1 logging.go:59] [core] [Channel #58 SubChannel #59] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0911 11:43:14.325345       1 logging.go:59] [core] [Channel #64 SubChannel #65] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: authentication handshake failed: EOF"
	W0911 11:43:14.358394       1 reflector.go:535] storage/cacher.go:/serviceaccounts: failed to list *core.ServiceAccount: rpc error: code = Internal desc = server closed the stream without sending trailers
	E0911 11:43:14.358564       1 cacher.go:470] cacher (serviceaccounts): unexpected ListAndWatch error: failed to list *core.ServiceAccount: rpc error: code = Internal desc = server closed the stream without sending trailers; reinitializing...
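	
	The exited apiserver lost its etcd connection on 127.0.0.1:2379, while the replacement synced its caches and resumed admission. The live instance's readiness can be probed check by check; a sketch:
	  kubectl --context pause-844693 get --raw='/readyz?verbose'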
	
	* 
	* ==> kube-controller-manager [835bc7b9b230ee62639349a0da59602136db9aee6c3f4f8b1dd733343e69f213] <==
	* I0911 11:43:40.701434       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="81.6µs"
	I0911 11:43:40.719741       1 shared_informer.go:318] Caches are synced for endpoint
	I0911 11:43:40.731009       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I0911 11:43:40.801732       1 shared_informer.go:318] Caches are synced for resource quota
	I0911 11:43:40.807924       1 shared_informer.go:318] Caches are synced for taint
	I0911 11:43:40.808033       1 node_lifecycle_controller.go:1225] "Initializing eviction metric for zone" zone=""
	I0911 11:43:40.808082       1 taint_manager.go:206] "Starting NoExecuteTaintManager"
	I0911 11:43:40.808143       1 taint_manager.go:211] "Sending events to api server"
	I0911 11:43:40.808159       1 event.go:307] "Event occurred" object="pause-844693" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-844693 event: Registered Node pause-844693 in Controller"
	I0911 11:43:40.808220       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="pause-844693"
	I0911 11:43:40.808313       1 node_lifecycle_controller.go:1071] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I0911 11:43:40.896655       1 shared_informer.go:318] Caches are synced for resource quota
	I0911 11:43:40.955177       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I0911 11:43:40.960348       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-2gn29"
	I0911 11:43:40.967566       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="12.489019ms"
	I0911 11:43:40.978264       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="10.628899ms"
	I0911 11:43:40.978415       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="97.374µs"
	I0911 11:43:41.210486       1 shared_informer.go:318] Caches are synced for garbage collector
	I0911 11:43:41.243686       1 shared_informer.go:318] Caches are synced for garbage collector
	I0911 11:43:41.243723       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0911 11:43:41.625604       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="119.974µs"
	I0911 11:43:41.642943       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="7.978649ms"
	I0911 11:43:41.643055       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="70.689µs"
	I0911 11:43:44.636878       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="80.944µs"
	I0911 11:43:44.647024       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="79.659µs"
	
	* 
	* ==> kube-controller-manager [aa9227286c98956417f65ee195d8cc9c096f779ac33dd93e51ec1f63e9c64727] <==
	* I0911 11:43:13.305810       1 serving.go:348] Generated self-signed cert in-memory
	I0911 11:43:14.111231       1 controllermanager.go:189] "Starting" version="v1.28.1"
	I0911 11:43:14.111265       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0911 11:43:14.112500       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0911 11:43:14.112628       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0911 11:43:14.113329       1 secure_serving.go:210] Serving securely on 127.0.0.1:10257
	I0911 11:43:14.113373       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	
	* 
	* ==> kube-proxy [0885e2fcf44f13ce18fb0b2e5369f657935199c74ef3bb6c3f7d944dd92c903f] <==
	* I0911 11:43:11.893874       1 server_others.go:69] "Using iptables proxy"
	E0911 11:43:11.896303       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-844693": dial tcp 192.168.76.2:8443: connect: connection refused
	
	* 
	* ==> kube-proxy [bb8630f2c0c739856ec1d9f5ae6e2cb86e6529c519f3a9f7a41a0c884b6df3f7] <==
	* I0911 11:43:24.786572       1 server_others.go:69] "Using iptables proxy"
	E0911 11:43:24.791732       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-844693": dial tcp 192.168.76.2:8443: connect: connection refused
	I0911 11:43:28.166246       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I0911 11:43:28.269163       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0911 11:43:28.271458       1 server_others.go:152] "Using iptables Proxier"
	I0911 11:43:28.271498       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0911 11:43:28.271507       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0911 11:43:28.271548       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0911 11:43:28.271848       1 server.go:846] "Version info" version="v1.28.1"
	I0911 11:43:28.271909       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0911 11:43:28.272601       1 config.go:97] "Starting endpoint slice config controller"
	I0911 11:43:28.272664       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0911 11:43:28.272623       1 config.go:188] "Starting service config controller"
	I0911 11:43:28.273260       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0911 11:43:28.272644       1 config.go:315] "Starting node config controller"
	I0911 11:43:28.273286       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0911 11:43:28.373065       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0911 11:43:28.373447       1 shared_informer.go:318] Caches are synced for node config
	I0911 11:43:28.373468       1 shared_informer.go:318] Caches are synced for service config
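	
	Both kube-proxy instances first failed to reach control-plane.minikube.internal:8443; the second succeeded once the apiserver returned at 11:43:28. Reachability can be spot-checked from inside the node; a sketch (the endpoint may answer 401 without credentials, which still proves the socket is up):
	  out/minikube-linux-amd64 -p pause-844693 ssh "curl -sk https://192.168.76.2:8443/healthz"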
	
	* 
	* ==> kube-scheduler [76d35a166fd5d8b00d62567d0e510be9f811d2a2733ee48dbe533273800db765] <==
	* I0911 11:43:13.179735       1 serving.go:348] Generated self-signed cert in-memory
	
	* 
	* ==> kube-scheduler [8a7deea25aedf792cb3feb59e6880809860c455ca2386b933bc5322f4e9d34b6] <==
	* I0911 11:43:25.709816       1 serving.go:348] Generated self-signed cert in-memory
	W0911 11:43:28.061004       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0911 11:43:28.061049       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system": RBAC: [role.rbac.authorization.k8s.io "extension-apiserver-authentication-reader" not found, role.rbac.authorization.k8s.io "system::leader-locking-kube-scheduler" not found]
	W0911 11:43:28.061063       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0911 11:43:28.061072       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0911 11:43:28.161097       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.1"
	I0911 11:43:28.161136       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0911 11:43:28.164377       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0911 11:43:28.164489       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0911 11:43:28.165375       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0911 11:43:28.165463       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0911 11:43:28.265973       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
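	
	The RBAC warnings above are transient while the restarted scheduler races the apiserver; the log's own suggested remedy, instantiated here with illustrative placeholder names for the binding and service account, would look like:
	  kubectl --context pause-844693 -n kube-system create rolebinding example-auth-reader --role=extension-apiserver-authentication-reader --serviceaccount=kube-system:example-sa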
	
	* 
	* ==> kubelet <==
	* Sep 11 11:43:24 pause-844693 kubelet[1591]: I0911 11:43:24.516591    1591 status_manager.go:853] "Failed to get status for pod" podUID="8cc32ea88c75cf7fa9232edbcac5cac2" pod="kube-system/kube-scheduler-pause-844693" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-844693\": dial tcp 192.168.76.2:8443: connect: connection refused"
	Sep 11 11:43:24 pause-844693 kubelet[1591]: I0911 11:43:24.516835    1591 status_manager.go:853] "Failed to get status for pod" podUID="57657833-600d-4091-86e2-a3cf9e965575" pod="kube-system/kindnet-7tct8" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kindnet-7tct8\": dial tcp 192.168.76.2:8443: connect: connection refused"
	Sep 11 11:43:24 pause-844693 kubelet[1591]: I0911 11:43:24.517073    1591 status_manager.go:853] "Failed to get status for pod" podUID="04ed39a0-59eb-429d-9b7f-73582d75d816" pod="kube-system/kube-proxy-gfzb6" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gfzb6\": dial tcp 192.168.76.2:8443: connect: connection refused"
	Sep 11 11:43:24 pause-844693 kubelet[1591]: I0911 11:43:24.517386    1591 status_manager.go:853] "Failed to get status for pod" podUID="f99fc8c7-b3f8-47b9-a741-686b6d387773" pod="kube-system/coredns-5dd5756b68-zvh8m" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-zvh8m\": dial tcp 192.168.76.2:8443: connect: connection refused"
	Sep 11 11:43:24 pause-844693 kubelet[1591]: I0911 11:43:24.517745    1591 status_manager.go:853] "Failed to get status for pod" podUID="ade2d2da-baae-423c-8c9a-6294d0d22277" pod="kube-system/coredns-5dd5756b68-2gn29" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-2gn29\": dial tcp 192.168.76.2:8443: connect: connection refused"
	Sep 11 11:43:24 pause-844693 kubelet[1591]: I0911 11:43:24.518046    1591 status_manager.go:853] "Failed to get status for pod" podUID="ff0209a81991e1d78879d688b130f8c3" pod="kube-system/etcd-pause-844693" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/etcd-pause-844693\": dial tcp 192.168.76.2:8443: connect: connection refused"
	Sep 11 11:43:24 pause-844693 kubelet[1591]: I0911 11:43:24.520167    1591 scope.go:117] "RemoveContainer" containerID="b3946a720abf45cb0400edf2961b8177cee7ded0d89a67215949fba8eed0285f"
	Sep 11 11:43:24 pause-844693 kubelet[1591]: I0911 11:43:24.570763    1591 scope.go:117] "RemoveContainer" containerID="abcad4a868fa9e3492e9b8da9cdb9c09be851280ca45cb057ad2790cfbe873f4"
	Sep 11 11:43:24 pause-844693 kubelet[1591]: I0911 11:43:24.665719    1591 scope.go:117] "RemoveContainer" containerID="98d435edeb4433e8035865016ccf3816a70447275adc8b069cb74e222026044b"
	Sep 11 11:43:24 pause-844693 kubelet[1591]: I0911 11:43:24.761980    1591 scope.go:117] "RemoveContainer" containerID="385f7e6d1f77e5b71772a46ca4a4f24f678c2c4c31f7b142a7d3c41c599e0115"
	Sep 11 11:43:24 pause-844693 kubelet[1591]: I0911 11:43:24.796026    1591 scope.go:117] "RemoveContainer" containerID="a131faaa13e53100059367ccbeb807c8ca911aaee113f897c694d56b0847b530"
	Sep 11 11:43:24 pause-844693 kubelet[1591]: I0911 11:43:24.881750    1591 scope.go:117] "RemoveContainer" containerID="dbe08d5d45acc84a41457fc5fd2e252933fc14c88b84fb18bb6d48ae40109115"
	Sep 11 11:43:24 pause-844693 kubelet[1591]: I0911 11:43:24.967258    1591 scope.go:117] "RemoveContainer" containerID="dbd37dfbd8007b159842812dbf088fe24d51c704801c40d390145bd3ef1ee2b7"
	Sep 11 11:43:28 pause-844693 kubelet[1591]: E0911 11:43:28.059239    1591 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: unknown (get configmaps)
	Sep 11 11:43:28 pause-844693 kubelet[1591]: E0911 11:43:28.069026    1591 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: unknown (get configmaps)
	Sep 11 11:43:30 pause-844693 kubelet[1591]: I0911 11:43:30.129156    1591 scope.go:117] "RemoveContainer" containerID="9a62d90cca609fcd0f7c1dfecfc6253779227bfcd3f89c5bc37f5abfab2e993c"
	Sep 11 11:43:30 pause-844693 kubelet[1591]: E0911 11:43:30.129451    1591 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 10s restarting failed container=coredns pod=coredns-5dd5756b68-zvh8m_kube-system(f99fc8c7-b3f8-47b9-a741-686b6d387773)\"" pod="kube-system/coredns-5dd5756b68-zvh8m" podUID="f99fc8c7-b3f8-47b9-a741-686b6d387773"
	Sep 11 11:43:30 pause-844693 kubelet[1591]: I0911 11:43:30.131699    1591 scope.go:117] "RemoveContainer" containerID="a441792974757ee7a4534b129dde0ff64172775499e60591ed65e0082d1c2680"
	Sep 11 11:43:30 pause-844693 kubelet[1591]: E0911 11:43:30.132167    1591 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 10s restarting failed container=coredns pod=coredns-5dd5756b68-2gn29_kube-system(ade2d2da-baae-423c-8c9a-6294d0d22277)\"" pod="kube-system/coredns-5dd5756b68-2gn29" podUID="ade2d2da-baae-423c-8c9a-6294d0d22277"
	Sep 11 11:43:41 pause-844693 kubelet[1591]: I0911 11:43:41.308080    1591 scope.go:117] "RemoveContainer" containerID="9a62d90cca609fcd0f7c1dfecfc6253779227bfcd3f89c5bc37f5abfab2e993c"
	Sep 11 11:43:44 pause-844693 kubelet[1591]: I0911 11:43:44.307854    1591 scope.go:117] "RemoveContainer" containerID="a441792974757ee7a4534b129dde0ff64172775499e60591ed65e0082d1c2680"
	Sep 11 11:43:45 pause-844693 kubelet[1591]: E0911 11:43:45.371718    1591 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/4f7e675738d17a8392a94f896b1488813c34b49e96e6a3331ae4bb6119b80696/diff" to get inode usage: stat /var/lib/containers/storage/overlay/4f7e675738d17a8392a94f896b1488813c34b49e96e6a3331ae4bb6119b80696/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/kube-system_etcd-pause-844693_ff0209a81991e1d78879d688b130f8c3/etcd/0.log" to get inode usage: stat /var/log/pods/kube-system_etcd-pause-844693_ff0209a81991e1d78879d688b130f8c3/etcd/0.log: no such file or directory
	Sep 11 11:43:45 pause-844693 kubelet[1591]: E0911 11:43:45.388701    1591 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/0abd77e79d5280922a8508cf5962ff9743b3a4068d16612655fce0ce37af6732/diff" to get inode usage: stat /var/lib/containers/storage/overlay/0abd77e79d5280922a8508cf5962ff9743b3a4068d16612655fce0ce37af6732/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/kube-system_kube-controller-manager-pause-844693_ef0c57dfba35c15e5cae89b29f3aaa26/kube-controller-manager/0.log" to get inode usage: stat /var/log/pods/kube-system_kube-controller-manager-pause-844693_ef0c57dfba35c15e5cae89b29f3aaa26/kube-controller-manager/0.log: no such file or directory
	Sep 11 11:43:45 pause-844693 kubelet[1591]: E0911 11:43:45.389788    1591 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/a4fa0d889fc399250d288af7158a4353953d563b9a76a1d3b83cc61bb34c3bb6/diff" to get inode usage: stat /var/lib/containers/storage/overlay/a4fa0d889fc399250d288af7158a4353953d563b9a76a1d3b83cc61bb34c3bb6/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/kube-system_kube-apiserver-pause-844693_dd016f978e4d2527ba2db43aba9496e8/kube-apiserver/0.log" to get inode usage: stat /var/log/pods/kube-system_kube-apiserver-pause-844693_dd016f978e4d2527ba2db43aba9496e8/kube-apiserver/0.log: no such file or directory
	Sep 11 11:43:45 pause-844693 kubelet[1591]: E0911 11:43:45.399452    1591 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/3dec642af2ced99c04002c8a24376b298332cf4be21fe3915b59aae464d8d7bc/diff" to get inode usage: stat /var/lib/containers/storage/overlay/3dec642af2ced99c04002c8a24376b298332cf4be21fe3915b59aae464d8d7bc/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/kube-system_kube-scheduler-pause-844693_8cc32ea88c75cf7fa9232edbcac5cac2/kube-scheduler/0.log" to get inode usage: stat /var/log/pods/kube-system_kube-scheduler-pause-844693_8cc32ea88c75cf7fa9232edbcac5cac2/kube-scheduler/0.log: no such file or directory
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0911 11:43:47.059687  342287 logs.go:266] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/17223-136166/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
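
The "token too long" error in stderr is Go's bufio.Scanner hitting its default 64 KiB token limit on a single oversized line in lastStart.txt. The offending line length can be confirmed with GNU wc; a sketch:
  wc -L /home/jenkins/minikube-integration/17223-136166/.minikube/logs/lastStart.txt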
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-844693 -n pause-844693
helpers_test.go:261: (dbg) Run:  kubectl --context pause-844693 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect pause-844693
helpers_test.go:235: (dbg) docker inspect pause-844693:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "19301acdf740a765cf8a948df53f89f0f5412ec12611a08738ed7ddc321e519f",
	        "Created": "2023-09-11T11:42:27.240188919Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 322838,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-09-11T11:42:27.599694483Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a5b1b95d50f24b5df6a9115c9ada0cb74f27ed4b03c4761eb60ee23f0bdd5210",
	        "ResolvConfPath": "/var/lib/docker/containers/19301acdf740a765cf8a948df53f89f0f5412ec12611a08738ed7ddc321e519f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/19301acdf740a765cf8a948df53f89f0f5412ec12611a08738ed7ddc321e519f/hostname",
	        "HostsPath": "/var/lib/docker/containers/19301acdf740a765cf8a948df53f89f0f5412ec12611a08738ed7ddc321e519f/hosts",
	        "LogPath": "/var/lib/docker/containers/19301acdf740a765cf8a948df53f89f0f5412ec12611a08738ed7ddc321e519f/19301acdf740a765cf8a948df53f89f0f5412ec12611a08738ed7ddc321e519f-json.log",
	        "Name": "/pause-844693",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "pause-844693:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-844693",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2147483648,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/c19b158e2211578bc0dd001705ce598d0dc4b2ac98547dea0ef6dc6f6b7f2054-init/diff:/var/lib/docker/overlay2/5fefd4c14d5bc4d7d67c2f6371e7160909b1f4d0d9a655e2a127286f8f0bbb5c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c19b158e2211578bc0dd001705ce598d0dc4b2ac98547dea0ef6dc6f6b7f2054/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c19b158e2211578bc0dd001705ce598d0dc4b2ac98547dea0ef6dc6f6b7f2054/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c19b158e2211578bc0dd001705ce598d0dc4b2ac98547dea0ef6dc6f6b7f2054/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "pause-844693",
	                "Source": "/var/lib/docker/volumes/pause-844693/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-844693",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-844693",
	                "name.minikube.sigs.k8s.io": "pause-844693",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b2e8989114acb2afcb6842c5918c1b59f132ddb21924fa1f0153a952d44500d0",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33107"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33106"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33103"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33105"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33104"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/b2e8989114ac",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-844693": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "19301acdf740",
	                        "pause-844693"
	                    ],
	                    "NetworkID": "816421c11511d905aaf1996ddf2d307ce7959ea60956a7b767ca58a7b283d397",
	                    "EndpointID": "275b1aa06486b89b207c63ab44405821efdc62d48dec642592f40653ed38ee3b",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
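
Individual fields can be extracted from this inspect output with a Go template instead of scanning the full JSON; a sketch pulling the host port mapped to the apiserver's 8443:
  docker inspect pause-844693 --format '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'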
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-844693 -n pause-844693
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-844693 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-844693 logs -n 25: (1.528949691s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p NoKubernetes-341786                | NoKubernetes-341786       | jenkins | v1.31.2 | 11 Sep 23 11:39 UTC | 11 Sep 23 11:39 UTC |
	| delete  | -p force-systemd-flag-682524          | force-systemd-flag-682524 | jenkins | v1.31.2 | 11 Sep 23 11:39 UTC | 11 Sep 23 11:39 UTC |
	| start   | -p NoKubernetes-341786                | NoKubernetes-341786       | jenkins | v1.31.2 | 11 Sep 23 11:39 UTC | 11 Sep 23 11:39 UTC |
	|         | --driver=docker                       |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-341786 sudo           | NoKubernetes-341786       | jenkins | v1.31.2 | 11 Sep 23 11:39 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |         |                     |                     |
	|         | service kubelet                       |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-341786                | NoKubernetes-341786       | jenkins | v1.31.2 | 11 Sep 23 11:39 UTC | 11 Sep 23 11:39 UTC |
	| delete  | -p offline-crio-341798                | offline-crio-341798       | jenkins | v1.31.2 | 11 Sep 23 11:40 UTC | 11 Sep 23 11:40 UTC |
	| start   | -p kubernetes-upgrade-872265          | kubernetes-upgrade-872265 | jenkins | v1.31.2 | 11 Sep 23 11:40 UTC | 11 Sep 23 11:40 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker                  |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-872265          | kubernetes-upgrade-872265 | jenkins | v1.31.2 | 11 Sep 23 11:40 UTC | 11 Sep 23 11:41 UTC |
	| start   | -p kubernetes-upgrade-872265          | kubernetes-upgrade-872265 | jenkins | v1.31.2 | 11 Sep 23 11:41 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker                  |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p missing-upgrade-782427             | missing-upgrade-782427    | jenkins | v1.31.2 | 11 Sep 23 11:41 UTC | 11 Sep 23 11:42 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker                  |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p stopped-upgrade-822606             | stopped-upgrade-822606    | jenkins | v1.31.2 | 11 Sep 23 11:41 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker                  |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-822606             | stopped-upgrade-822606    | jenkins | v1.31.2 | 11 Sep 23 11:41 UTC | 11 Sep 23 11:41 UTC |
	| start   | -p cert-options-645915                | cert-options-645915       | jenkins | v1.31.2 | 11 Sep 23 11:41 UTC | 11 Sep 23 11:41 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=docker                       |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | cert-options-645915 ssh               | cert-options-645915       | jenkins | v1.31.2 | 11 Sep 23 11:41 UTC | 11 Sep 23 11:41 UTC |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-645915 -- sudo        | cert-options-645915       | jenkins | v1.31.2 | 11 Sep 23 11:41 UTC | 11 Sep 23 11:41 UTC |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-645915                | cert-options-645915       | jenkins | v1.31.2 | 11 Sep 23 11:41 UTC | 11 Sep 23 11:41 UTC |
	| delete  | -p missing-upgrade-782427             | missing-upgrade-782427    | jenkins | v1.31.2 | 11 Sep 23 11:42 UTC | 11 Sep 23 11:42 UTC |
	| start   | -p pause-844693 --memory=2048         | pause-844693              | jenkins | v1.31.2 | 11 Sep 23 11:42 UTC | 11 Sep 23 11:43 UTC |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=all --driver=docker            |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p cert-expiration-352590             | cert-expiration-352590    | jenkins | v1.31.2 | 11 Sep 23 11:42 UTC | 11 Sep 23 11:42 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=docker                       |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p running-upgrade-398660             | running-upgrade-398660    | jenkins | v1.31.2 | 11 Sep 23 11:42 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker                  |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-398660             | running-upgrade-398660    | jenkins | v1.31.2 | 11 Sep 23 11:42 UTC | 11 Sep 23 11:42 UTC |
	| delete  | -p cert-expiration-352590             | cert-expiration-352590    | jenkins | v1.31.2 | 11 Sep 23 11:42 UTC | 11 Sep 23 11:43 UTC |
	| start   | -p auto-917885 --memory=3072          | auto-917885               | jenkins | v1.31.2 | 11 Sep 23 11:42 UTC |                     |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                    |                           |         |         |                     |                     |
	|         | --driver=docker                       |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kindnet-917885                     | kindnet-917885            | jenkins | v1.31.2 | 11 Sep 23 11:43 UTC |                     |
	|         | --memory=3072                         |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                    |                           |         |         |                     |                     |
	|         | --cni=kindnet --driver=docker         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p pause-844693                       | pause-844693              | jenkins | v1.31.2 | 11 Sep 23 11:43 UTC | 11 Sep 23 11:43 UTC |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker                  |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
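	Note: the Audit table above is pipe-delimited plain text with seven columns (Command, Args, Profile, User, Version, Start Time, End Time); continuation lines of a multi-line Args cell leave the other columns blank. A small illustrative Go sketch for splitting one row into its columns (assumes the row layout shown above):

	// auditrow.go: split one pipe-delimited Audit row into trimmed cells.
	package main

	import (
		"fmt"
		"strings"
	)

	func parseRow(row string) []string {
		cells := strings.Split(strings.Trim(row, "|"), "|")
		for i, c := range cells {
			cells[i] = strings.TrimSpace(c)
		}
		return cells
	}

	func main() {
		row := "| start   | -p pause-844693 | pause-844693 | jenkins | v1.31.2 | 11 Sep 23 11:43 UTC | 11 Sep 23 11:43 UTC |"
		cells := parseRow(row)
		fmt.Printf("command=%q profile=%q\n", cells[0], cells[2]) // command="start" profile="pause-844693"
	}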
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/11 11:43:04
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.20.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0911 11:43:04.042799  333486 out.go:296] Setting OutFile to fd 1 ...
	I0911 11:43:04.042943  333486 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 11:43:04.042952  333486 out.go:309] Setting ErrFile to fd 2...
	I0911 11:43:04.042957  333486 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 11:43:04.043166  333486 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17223-136166/.minikube/bin
	I0911 11:43:04.043735  333486 out.go:303] Setting JSON to false
	I0911 11:43:04.045353  333486 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":5132,"bootTime":1694427452,"procs":842,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0911 11:43:04.045427  333486 start.go:138] virtualization: kvm guest
	I0911 11:43:04.090864  333486 out.go:177] * [pause-844693] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0911 11:43:04.167289  333486 notify.go:220] Checking for updates...
	I0911 11:43:04.232968  333486 out.go:177]   - MINIKUBE_LOCATION=17223
	I0911 11:43:04.303428  333486 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0911 11:43:04.402314  333486 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17223-136166/kubeconfig
	I0911 11:43:04.434527  333486 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17223-136166/.minikube
	I0911 11:43:04.498400  333486 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0911 11:43:04.560250  333486 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0911 11:42:59.890833  332029 out.go:204] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0911 11:42:59.891074  332029 start.go:159] libmachine.API.Create for "auto-917885" (driver="docker")
	I0911 11:42:59.891097  332029 client.go:168] LocalClient.Create starting
	I0911 11:42:59.891149  332029 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17223-136166/.minikube/certs/ca.pem
	I0911 11:42:59.891178  332029 main.go:141] libmachine: Decoding PEM data...
	I0911 11:42:59.891194  332029 main.go:141] libmachine: Parsing certificate...
	I0911 11:42:59.891251  332029 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17223-136166/.minikube/certs/cert.pem
	I0911 11:42:59.891269  332029 main.go:141] libmachine: Decoding PEM data...
	I0911 11:42:59.891277  332029 main.go:141] libmachine: Parsing certificate...
	I0911 11:42:59.891579  332029 cli_runner.go:164] Run: docker network inspect auto-917885 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0911 11:42:59.908806  332029 cli_runner.go:211] docker network inspect auto-917885 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0911 11:42:59.908882  332029 network_create.go:281] running [docker network inspect auto-917885] to gather additional debugging logs...
	I0911 11:42:59.908907  332029 cli_runner.go:164] Run: docker network inspect auto-917885
	W0911 11:42:59.927118  332029 cli_runner.go:211] docker network inspect auto-917885 returned with exit code 1
	I0911 11:42:59.927161  332029 network_create.go:284] error running [docker network inspect auto-917885]: docker network inspect auto-917885: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network auto-917885 not found
	I0911 11:42:59.927185  332029 network_create.go:286] output of [docker network inspect auto-917885]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network auto-917885 not found
	
	** /stderr **
	I0911 11:42:59.927239  332029 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0911 11:42:59.946001  332029 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-20e875ef8442 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:d7:c6:0a:5c} reservation:<nil>}
	I0911 11:42:59.946764  332029 network.go:214] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-40f62e59100c IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:ee:21:f8:bd} reservation:<nil>}
	I0911 11:42:59.947517  332029 network.go:214] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-a151a90a714a IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:1f:ed:6f:6b} reservation:<nil>}
	I0911 11:42:59.948366  332029 network.go:214] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-816421c11511 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:02:42:0b:f6:61:1a} reservation:<nil>}
	I0911 11:42:59.950079  332029 network.go:209] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001709dd0}
	I0911 11:42:59.950167  332029 network_create.go:123] attempt to create docker network auto-917885 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0911 11:42:59.950245  332029 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-917885 auto-917885
	I0911 11:43:00.009154  332029 network_create.go:107] docker network auto-917885 192.168.85.0/24 created
	I0911 11:43:00.009194  332029 kic.go:117] calculated static IP "192.168.85.2" for the "auto-917885" container
	I0911 11:43:00.009272  332029 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0911 11:43:00.028336  332029 cli_runner.go:164] Run: docker volume create auto-917885 --label name.minikube.sigs.k8s.io=auto-917885 --label created_by.minikube.sigs.k8s.io=true
	I0911 11:43:00.051685  332029 oci.go:103] Successfully created a docker volume auto-917885
	I0911 11:43:00.051779  332029 cli_runner.go:164] Run: docker run --rm --name auto-917885-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-917885 --entrypoint /usr/bin/test -v auto-917885:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b -d /var/lib
	I0911 11:43:00.894980  332029 oci.go:107] Successfully prepared a docker volume auto-917885
	I0911 11:43:00.895021  332029 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0911 11:43:00.895048  332029 kic.go:190] Starting extracting preloaded images to volume ...
	I0911 11:43:00.895132  332029 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17223-136166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v auto-917885:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b -I lz4 -xf /preloaded.tar -C /extractDir
	I0911 11:43:04.579049  332029 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17223-136166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v auto-917885:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b -I lz4 -xf /preloaded.tar -C /extractDir: (3.683860902s)
	I0911 11:43:04.579086  332029 kic.go:199] duration metric: took 3.684033 seconds to extract preloaded images to volume
	W0911 11:43:04.579270  332029 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0911 11:43:04.579408  332029 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
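	Note on the network.go lines at 11:42:59 above: the free-subnet search walks candidate /24s by advancing the third octet in steps of 9 (192.168.49.0, .58, .67, .76, ...) and takes the first one with no existing bridge. A sketch of just that arithmetic in Go (the real code also inspects host interfaces and reservations):

	// subnets.go: reproduce the candidate order and skip/use decisions above.
	package main

	import "fmt"

	func main() {
		taken := map[int]bool{49: true, 58: true, 67: true, 76: true} // bridges already present in this log
		for octet := 49; octet < 255; octet += 9 {
			if taken[octet] {
				fmt.Printf("skipping subnet 192.168.%d.0/24 that is taken\n", octet)
				continue
			}
			fmt.Printf("using free private subnet 192.168.%d.0/24\n", octet)
			break
		}
	}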
	I0911 11:43:04.562773  333486 config.go:182] Loaded profile config "pause-844693": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0911 11:43:04.563359  333486 driver.go:373] Setting default libvirt URI to qemu:///system
	I0911 11:43:04.587033  333486 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I0911 11:43:04.587145  333486 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0911 11:43:04.683210  333486 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:74 SystemTime:2023-09-11 11:43:04.673064676 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1041-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0911 11:43:04.683352  333486 docker.go:294] overlay module found
	I0911 11:43:04.685735  333486 out.go:177] * Using the docker driver based on existing profile
	I0911 11:43:04.687310  333486 start.go:298] selected driver: docker
	I0911 11:43:04.687331  333486 start.go:902] validating driver "docker" against &{Name:pause-844693 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:pause-844693 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 11:43:04.687471  333486 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0911 11:43:04.687538  333486 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0911 11:43:04.777334  333486 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:2 ContainersPaused:0 ContainersStopped:2 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:49 OomKillDisable:true NGoroutines:80 SystemTime:2023-09-11 11:43:04.765950538 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1041-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0911 11:43:04.778260  333486 cni.go:84] Creating CNI manager for ""
	I0911 11:43:04.778281  333486 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0911 11:43:04.778296  333486 start_flags.go:321] config:
	{Name:pause-844693 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:pause-844693 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 11:43:04.782003  333486 out.go:177] * Starting control plane node pause-844693 in cluster pause-844693
	I0911 11:43:04.783593  333486 cache.go:122] Beginning downloading kic base image for docker with crio
	I0911 11:43:04.785125  333486 out.go:177] * Pulling base image ...
	I0911 11:43:04.786957  333486 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0911 11:43:04.787017  333486 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17223-136166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4
	I0911 11:43:04.787035  333486 cache.go:57] Caching tarball of preloaded images
	I0911 11:43:04.787103  333486 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b in local docker daemon
	I0911 11:43:04.787134  333486 preload.go:174] Found /home/jenkins/minikube-integration/17223-136166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0911 11:43:04.787145  333486 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on crio
	I0911 11:43:04.787352  333486 profile.go:148] Saving config to /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/pause-844693/config.json ...
	I0911 11:43:04.808052  333486 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b in local docker daemon, skipping pull
	I0911 11:43:04.808074  333486 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b exists in daemon, skipping load
	I0911 11:43:04.808088  333486 cache.go:195] Successfully downloaded all kic artifacts
	I0911 11:43:04.808120  333486 start.go:365] acquiring machines lock for pause-844693: {Name:mk61e59c2f16fc85e6756af64b9f30077c437f1d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 11:43:04.808179  333486 start.go:369] acquired machines lock for "pause-844693" in 41.449µs
	I0911 11:43:04.808195  333486 start.go:96] Skipping create...Using existing machine configuration
	I0911 11:43:04.808200  333486 fix.go:54] fixHost starting: 
	I0911 11:43:04.808411  333486 cli_runner.go:164] Run: docker container inspect pause-844693 --format={{.State.Status}}
	I0911 11:43:04.829433  333486 fix.go:102] recreateIfNeeded on pause-844693: state=Running err=<nil>
	W0911 11:43:04.829467  333486 fix.go:128] unexpected machine state, will restart: <nil>
	I0911 11:43:04.832863  333486 out.go:177] * Updating the running docker "pause-844693" container ...
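	Note on the machines lock above (Delay:500ms Timeout:10m0s): start.go serializes machine operations and retries acquisition on a fixed delay until the timeout elapses; here it succeeded in 41µs because nothing else held the lock. An in-process Go sketch of that retry loop (an assumption for illustration; the real lock is cross-process):

	// machinelock.go: model the Delay/Timeout acquisition loop from the log.
	package main

	import (
		"errors"
		"fmt"
		"sync"
		"time"
	)

	var machines sync.Mutex // in-process stand-in for minikube's cross-process lock

	func acquire(profile string, delay, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			if machines.TryLock() {
				return nil
			}
			if time.Now().After(deadline) {
				return errors.New("timed out acquiring machines lock for " + profile)
			}
			time.Sleep(delay) // Delay:500ms in the log's lock spec
		}
	}

	func main() {
		if err := acquire("pause-844693", 500*time.Millisecond, 10*time.Minute); err != nil {
			fmt.Println(err)
			return
		}
		defer machines.Unlock()
		fmt.Println("acquired machines lock for \"pause-844693\"")
	}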
	I0911 11:43:01.366401  332971 out.go:204] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0911 11:43:01.366627  332971 start.go:159] libmachine.API.Create for "kindnet-917885" (driver="docker")
	I0911 11:43:01.366653  332971 client.go:168] LocalClient.Create starting
	I0911 11:43:01.366711  332971 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17223-136166/.minikube/certs/ca.pem
	I0911 11:43:01.366742  332971 main.go:141] libmachine: Decoding PEM data...
	I0911 11:43:01.366756  332971 main.go:141] libmachine: Parsing certificate...
	I0911 11:43:01.366819  332971 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17223-136166/.minikube/certs/cert.pem
	I0911 11:43:01.366837  332971 main.go:141] libmachine: Decoding PEM data...
	I0911 11:43:01.366848  332971 main.go:141] libmachine: Parsing certificate...
	I0911 11:43:01.367146  332971 cli_runner.go:164] Run: docker network inspect kindnet-917885 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0911 11:43:01.386272  332971 cli_runner.go:211] docker network inspect kindnet-917885 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0911 11:43:01.386392  332971 network_create.go:281] running [docker network inspect kindnet-917885] to gather additional debugging logs...
	I0911 11:43:01.386423  332971 cli_runner.go:164] Run: docker network inspect kindnet-917885
	W0911 11:43:01.404358  332971 cli_runner.go:211] docker network inspect kindnet-917885 returned with exit code 1
	I0911 11:43:01.404396  332971 network_create.go:284] error running [docker network inspect kindnet-917885]: docker network inspect kindnet-917885: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network kindnet-917885 not found
	I0911 11:43:01.404427  332971 network_create.go:286] output of [docker network inspect kindnet-917885]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network kindnet-917885 not found
	
	** /stderr **
	I0911 11:43:01.404491  332971 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0911 11:43:01.424104  332971 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-20e875ef8442 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:d7:c6:0a:5c} reservation:<nil>}
	I0911 11:43:01.424980  332971 network.go:214] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-40f62e59100c IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:ee:21:f8:bd} reservation:<nil>}
	I0911 11:43:01.425749  332971 network.go:214] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-a151a90a714a IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:1f:ed:6f:6b} reservation:<nil>}
	I0911 11:43:01.426716  332971 network.go:214] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-816421c11511 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:02:42:0b:f6:61:1a} reservation:<nil>}
	I0911 11:43:01.427602  332971 network.go:214] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-32603fed1456 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:02:42:6c:c2:0d:6a} reservation:<nil>}
	I0911 11:43:01.428435  332971 network.go:209] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001563ab0}
	I0911 11:43:01.428467  332971 network_create.go:123] attempt to create docker network kindnet-917885 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I0911 11:43:01.428526  332971 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kindnet-917885 kindnet-917885
	I0911 11:43:01.500197  332971 network_create.go:107] docker network kindnet-917885 192.168.94.0/24 created
	I0911 11:43:01.500232  332971 kic.go:117] calculated static IP "192.168.94.2" for the "kindnet-917885" container
	I0911 11:43:01.500333  332971 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0911 11:43:01.517738  332971 cli_runner.go:164] Run: docker volume create kindnet-917885 --label name.minikube.sigs.k8s.io=kindnet-917885 --label created_by.minikube.sigs.k8s.io=true
	I0911 11:43:01.538051  332971 oci.go:103] Successfully created a docker volume kindnet-917885
	I0911 11:43:01.538199  332971 cli_runner.go:164] Run: docker run --rm --name kindnet-917885-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-917885 --entrypoint /usr/bin/test -v kindnet-917885:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b -d /var/lib
	I0911 11:43:04.578040  332971 cli_runner.go:217] Completed: docker run --rm --name kindnet-917885-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-917885 --entrypoint /usr/bin/test -v kindnet-917885:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b -d /var/lib: (3.03978727s)
	I0911 11:43:04.578075  332971 oci.go:107] Successfully prepared a docker volume kindnet-917885
	I0911 11:43:04.578122  332971 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0911 11:43:04.578148  332971 kic.go:190] Starting extracting preloaded images to volume ...
	I0911 11:43:04.578222  332971 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17223-136166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-917885:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b -I lz4 -xf /preloaded.tar -C /extractDir
	I0911 11:43:04.686171  306855 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0911 11:43:04.686587  306855 api_server.go:269] stopped: https://192.168.67.2:8443/healthz: Get "https://192.168.67.2:8443/healthz": dial tcp 192.168.67.2:8443: connect: connection refused
	I0911 11:43:04.686632  306855 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0911 11:43:04.686678  306855 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0911 11:43:04.739051  306855 cri.go:89] found id: "b7f54e8d3985de06b7fea85e2d43d2c57b87e1c8541ffc22fbb1a69c05525596"
	I0911 11:43:04.739073  306855 cri.go:89] found id: ""
	I0911 11:43:04.739083  306855 logs.go:284] 1 containers: [b7f54e8d3985de06b7fea85e2d43d2c57b87e1c8541ffc22fbb1a69c05525596]
	I0911 11:43:04.739138  306855 ssh_runner.go:195] Run: which crictl
	I0911 11:43:04.744552  306855 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0911 11:43:04.744624  306855 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0911 11:43:04.793439  306855 cri.go:89] found id: ""
	I0911 11:43:04.793463  306855 logs.go:284] 0 containers: []
	W0911 11:43:04.793474  306855 logs.go:286] No container was found matching "etcd"
	I0911 11:43:04.793482  306855 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0911 11:43:04.793537  306855 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0911 11:43:04.833969  306855 cri.go:89] found id: ""
	I0911 11:43:04.833992  306855 logs.go:284] 0 containers: []
	W0911 11:43:04.833999  306855 logs.go:286] No container was found matching "coredns"
	I0911 11:43:04.834005  306855 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0911 11:43:04.834061  306855 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0911 11:43:04.879099  306855 cri.go:89] found id: ""
	I0911 11:43:04.879129  306855 logs.go:284] 0 containers: []
	W0911 11:43:04.879139  306855 logs.go:286] No container was found matching "kube-scheduler"
	I0911 11:43:04.879185  306855 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0911 11:43:04.879252  306855 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0911 11:43:04.921504  306855 cri.go:89] found id: ""
	I0911 11:43:04.921533  306855 logs.go:284] 0 containers: []
	W0911 11:43:04.921542  306855 logs.go:286] No container was found matching "kube-proxy"
	I0911 11:43:04.921550  306855 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0911 11:43:04.921616  306855 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0911 11:43:04.959890  306855 cri.go:89] found id: ""
	I0911 11:43:04.959920  306855 logs.go:284] 0 containers: []
	W0911 11:43:04.959931  306855 logs.go:286] No container was found matching "kube-controller-manager"
	I0911 11:43:04.959940  306855 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0911 11:43:04.959997  306855 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0911 11:43:05.005187  306855 cri.go:89] found id: ""
	I0911 11:43:05.005218  306855 logs.go:284] 0 containers: []
	W0911 11:43:05.005231  306855 logs.go:286] No container was found matching "kindnet"
	I0911 11:43:05.005239  306855 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0911 11:43:05.005313  306855 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0911 11:43:05.076523  306855 cri.go:89] found id: ""
	I0911 11:43:05.076544  306855 logs.go:284] 0 containers: []
	W0911 11:43:05.076553  306855 logs.go:286] No container was found matching "storage-provisioner"
	I0911 11:43:05.076576  306855 logs.go:123] Gathering logs for dmesg ...
	I0911 11:43:05.076643  306855 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0911 11:43:05.107963  306855 logs.go:123] Gathering logs for describe nodes ...
	I0911 11:43:05.108075  306855 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0911 11:43:05.194411  306855 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0911 11:43:05.194435  306855 logs.go:123] Gathering logs for kube-apiserver [b7f54e8d3985de06b7fea85e2d43d2c57b87e1c8541ffc22fbb1a69c05525596] ...
	I0911 11:43:05.194449  306855 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b7f54e8d3985de06b7fea85e2d43d2c57b87e1c8541ffc22fbb1a69c05525596"
	I0911 11:43:05.252239  306855 logs.go:123] Gathering logs for CRI-O ...
	I0911 11:43:05.252281  306855 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0911 11:43:05.285393  306855 logs.go:123] Gathering logs for container status ...
	I0911 11:43:05.285447  306855 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0911 11:43:05.337247  306855 logs.go:123] Gathering logs for kubelet ...
	I0911 11:43:05.337276  306855 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
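	Note on the api_server.go lines that open this block: the health check is a plain HTTPS GET of /healthz on the apiserver address, and the refused connection ("stopped: ...") is what triggers the per-component crictl sweep and log gathering that follow. A Go sketch of the probe (illustrative; assumes skipping certificate verification, since the serving cert is cluster-local):

	// healthz.go: probe the apiserver endpoint seen in this log.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// The apiserver's cert is not in the system roots, so skip verification.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://192.168.67.2:8443/healthz")
		if err != nil {
			fmt.Println("stopped:", err) // e.g. "connect: connection refused" as above
			return
		}
		defer resp.Body.Close()
		fmt.Println("healthz:", resp.Status)
	}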
	I0911 11:43:04.834633  333486 machine.go:88] provisioning docker machine ...
	I0911 11:43:04.834660  333486 ubuntu.go:169] provisioning hostname "pause-844693"
	I0911 11:43:04.834739  333486 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-844693
	I0911 11:43:04.855490  333486 main.go:141] libmachine: Using SSH client type: native
	I0911 11:43:04.855948  333486 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 127.0.0.1 33107 <nil> <nil>}
	I0911 11:43:04.855960  333486 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-844693 && echo "pause-844693" | sudo tee /etc/hostname
	I0911 11:43:05.046950  333486 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-844693
	
	I0911 11:43:05.047034  333486 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-844693
	I0911 11:43:05.076281  333486 main.go:141] libmachine: Using SSH client type: native
	I0911 11:43:05.076960  333486 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 127.0.0.1 33107 <nil> <nil>}
	I0911 11:43:05.076989  333486 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-844693' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-844693/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-844693' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0911 11:43:05.230841  333486 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0911 11:43:05.230869  333486 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17223-136166/.minikube CaCertPath:/home/jenkins/minikube-integration/17223-136166/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17223-136166/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17223-136166/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17223-136166/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17223-136166/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17223-136166/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17223-136166/.minikube}
	I0911 11:43:05.230888  333486 ubuntu.go:177] setting up certificates
	I0911 11:43:05.230898  333486 provision.go:83] configureAuth start
	I0911 11:43:05.230963  333486 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-844693
	I0911 11:43:05.256111  333486 provision.go:138] copyHostCerts
	I0911 11:43:05.256165  333486 exec_runner.go:144] found /home/jenkins/minikube-integration/17223-136166/.minikube/ca.pem, removing ...
	I0911 11:43:05.256172  333486 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17223-136166/.minikube/ca.pem
	I0911 11:43:05.256235  333486 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17223-136166/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17223-136166/.minikube/ca.pem (1082 bytes)
	I0911 11:43:05.256331  333486 exec_runner.go:144] found /home/jenkins/minikube-integration/17223-136166/.minikube/cert.pem, removing ...
	I0911 11:43:05.256338  333486 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17223-136166/.minikube/cert.pem
	I0911 11:43:05.256361  333486 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17223-136166/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17223-136166/.minikube/cert.pem (1123 bytes)
	I0911 11:43:05.256410  333486 exec_runner.go:144] found /home/jenkins/minikube-integration/17223-136166/.minikube/key.pem, removing ...
	I0911 11:43:05.256414  333486 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17223-136166/.minikube/key.pem
	I0911 11:43:05.256433  333486 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17223-136166/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17223-136166/.minikube/key.pem (1679 bytes)
	I0911 11:43:05.256475  333486 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17223-136166/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17223-136166/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17223-136166/.minikube/certs/ca-key.pem org=jenkins.pause-844693 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube pause-844693]
	I0911 11:43:05.606200  333486 provision.go:172] copyRemoteCerts
	I0911 11:43:05.606281  333486 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0911 11:43:05.606333  333486 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-844693
	I0911 11:43:05.624128  333486 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33107 SSHKeyPath:/home/jenkins/minikube-integration/17223-136166/.minikube/machines/pause-844693/id_rsa Username:docker}
	I0911 11:43:05.721139  333486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0911 11:43:05.743381  333486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0911 11:43:05.805366  333486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0911 11:43:05.829471  333486 provision.go:86] duration metric: configureAuth took 598.55837ms
	I0911 11:43:05.829497  333486 ubuntu.go:193] setting minikube options for container-runtime
	I0911 11:43:05.829731  333486 config.go:182] Loaded profile config "pause-844693": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0911 11:43:05.829841  333486 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-844693
	I0911 11:43:05.847201  333486 main.go:141] libmachine: Using SSH client type: native
	I0911 11:43:05.847619  333486 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 127.0.0.1 33107 <nil> <nil>}
	I0911 11:43:05.847639  333486 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0911 11:43:04.672319  332029 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-917885 --name auto-917885 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-917885 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-917885 --network auto-917885 --ip 192.168.85.2 --volume auto-917885:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b
	I0911 11:43:05.061375  332029 cli_runner.go:164] Run: docker container inspect auto-917885 --format={{.State.Running}}
	I0911 11:43:05.086925  332029 cli_runner.go:164] Run: docker container inspect auto-917885 --format={{.State.Status}}
	I0911 11:43:05.116366  332029 cli_runner.go:164] Run: docker exec auto-917885 stat /var/lib/dpkg/alternatives/iptables
	I0911 11:43:05.168142  332029 oci.go:144] the created container "auto-917885" has a running status.
	I0911 11:43:05.168178  332029 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/17223-136166/.minikube/machines/auto-917885/id_rsa...
	I0911 11:43:05.330664  332029 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17223-136166/.minikube/machines/auto-917885/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0911 11:43:05.356164  332029 cli_runner.go:164] Run: docker container inspect auto-917885 --format={{.State.Status}}
	I0911 11:43:05.380464  332029 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0911 11:43:05.380489  332029 kic_runner.go:114] Args: [docker exec --privileged auto-917885 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0911 11:43:05.463815  332029 cli_runner.go:164] Run: docker container inspect auto-917885 --format={{.State.Status}}
	I0911 11:43:05.485171  332029 machine.go:88] provisioning docker machine ...
	I0911 11:43:05.485217  332029 ubuntu.go:169] provisioning hostname "auto-917885"
	I0911 11:43:05.485285  332029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-917885
	I0911 11:43:05.510336  332029 main.go:141] libmachine: Using SSH client type: native
	I0911 11:43:05.511014  332029 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 127.0.0.1 33112 <nil> <nil>}
	I0911 11:43:05.511047  332029 main.go:141] libmachine: About to run SSH command:
	sudo hostname auto-917885 && echo "auto-917885" | sudo tee /etc/hostname
	I0911 11:43:05.511774  332029 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:36070->127.0.0.1:33112: read: connection reset by peer
	I0911 11:43:08.681008  332029 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-917885
	
	I0911 11:43:08.681110  332029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-917885
	I0911 11:43:08.701951  332029 main.go:141] libmachine: Using SSH client type: native
	I0911 11:43:08.702660  332029 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 127.0.0.1 33112 <nil> <nil>}
	I0911 11:43:08.702695  332029 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-917885' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-917885/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-917885' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0911 11:43:08.834321  332029 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0911 11:43:08.834351  332029 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17223-136166/.minikube CaCertPath:/home/jenkins/minikube-integration/17223-136166/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17223-136166/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17223-136166/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17223-136166/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17223-136166/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17223-136166/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17223-136166/.minikube}
	I0911 11:43:08.834394  332029 ubuntu.go:177] setting up certificates
	I0911 11:43:08.834407  332029 provision.go:83] configureAuth start
	I0911 11:43:08.834458  332029 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-917885
	I0911 11:43:08.853165  332029 provision.go:138] copyHostCerts
	I0911 11:43:08.853229  332029 exec_runner.go:144] found /home/jenkins/minikube-integration/17223-136166/.minikube/ca.pem, removing ...
	I0911 11:43:08.853238  332029 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17223-136166/.minikube/ca.pem
	I0911 11:43:08.853317  332029 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17223-136166/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17223-136166/.minikube/ca.pem (1082 bytes)
	I0911 11:43:08.853400  332029 exec_runner.go:144] found /home/jenkins/minikube-integration/17223-136166/.minikube/cert.pem, removing ...
	I0911 11:43:08.853404  332029 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17223-136166/.minikube/cert.pem
	I0911 11:43:08.853430  332029 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17223-136166/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17223-136166/.minikube/cert.pem (1123 bytes)
	I0911 11:43:08.853480  332029 exec_runner.go:144] found /home/jenkins/minikube-integration/17223-136166/.minikube/key.pem, removing ...
	I0911 11:43:08.853484  332029 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17223-136166/.minikube/key.pem
	I0911 11:43:08.853502  332029 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17223-136166/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17223-136166/.minikube/key.pem (1679 bytes)
	I0911 11:43:08.853542  332029 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17223-136166/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17223-136166/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17223-136166/.minikube/certs/ca-key.pem org=jenkins.auto-917885 san=[192.168.85.2 127.0.0.1 localhost 127.0.0.1 minikube auto-917885]
	I0911 11:43:09.205329  332029 provision.go:172] copyRemoteCerts
	I0911 11:43:09.205412  332029 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0911 11:43:09.205460  332029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-917885
	I0911 11:43:09.223875  332029 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33112 SSHKeyPath:/home/jenkins/minikube-integration/17223-136166/.minikube/machines/auto-917885/id_rsa Username:docker}
	I0911 11:43:09.321383  332029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0911 11:43:09.347511  332029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I0911 11:43:09.371634  332029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0911 11:43:09.395019  332029 provision.go:86] duration metric: configureAuth took 560.593148ms
	I0911 11:43:09.395046  332029 ubuntu.go:193] setting minikube options for container-runtime
	I0911 11:43:09.395245  332029 config.go:182] Loaded profile config "auto-917885": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0911 11:43:09.395377  332029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-917885
	I0911 11:43:09.413145  332029 main.go:141] libmachine: Using SSH client type: native
	I0911 11:43:09.413547  332029 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 127.0.0.1 33112 <nil> <nil>}
	I0911 11:43:09.413563  332029 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0911 11:43:09.632896  332029 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0911 11:43:09.632922  332029 machine.go:91] provisioned docker machine in 4.147727191s
	I0911 11:43:09.632931  332029 client.go:171] LocalClient.Create took 9.741829209s
	I0911 11:43:09.632948  332029 start.go:167] duration metric: libmachine.API.Create for "auto-917885" took 9.741873554s
	I0911 11:43:09.632956  332029 start.go:300] post-start starting for "auto-917885" (driver="docker")
	I0911 11:43:09.632967  332029 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0911 11:43:09.633042  332029 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0911 11:43:09.633087  332029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-917885
	I0911 11:43:09.650687  332029 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33112 SSHKeyPath:/home/jenkins/minikube-integration/17223-136166/.minikube/machines/auto-917885/id_rsa Username:docker}
	I0911 11:43:09.743242  332029 ssh_runner.go:195] Run: cat /etc/os-release
	I0911 11:43:09.746542  332029 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0911 11:43:09.746582  332029 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0911 11:43:09.746622  332029 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0911 11:43:09.746636  332029 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0911 11:43:09.746650  332029 filesync.go:126] Scanning /home/jenkins/minikube-integration/17223-136166/.minikube/addons for local assets ...
	I0911 11:43:09.746717  332029 filesync.go:126] Scanning /home/jenkins/minikube-integration/17223-136166/.minikube/files for local assets ...
	I0911 11:43:09.746810  332029 filesync.go:149] local asset: /home/jenkins/minikube-integration/17223-136166/.minikube/files/etc/ssl/certs/1434172.pem -> 1434172.pem in /etc/ssl/certs
	I0911 11:43:09.746920  332029 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0911 11:43:09.755141  332029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/files/etc/ssl/certs/1434172.pem --> /etc/ssl/certs/1434172.pem (1708 bytes)
	I0911 11:43:09.777523  332029 start.go:303] post-start completed in 144.551819ms
	I0911 11:43:09.777932  332029 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-917885
	I0911 11:43:09.795105  332029 profile.go:148] Saving config to /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/auto-917885/config.json ...
	I0911 11:43:09.795362  332029 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0911 11:43:09.795405  332029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-917885
	I0911 11:43:09.812704  332029 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33112 SSHKeyPath:/home/jenkins/minikube-integration/17223-136166/.minikube/machines/auto-917885/id_rsa Username:docker}
	I0911 11:43:09.903024  332029 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0911 11:43:09.907160  332029 start.go:128] duration metric: createHost completed in 10.01850521s
	I0911 11:43:09.907195  332029 start.go:83] releasing machines lock for "auto-917885", held for 10.018698576s
	I0911 11:43:09.907265  332029 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-917885
	I0911 11:43:09.924513  332029 ssh_runner.go:195] Run: cat /version.json
	I0911 11:43:09.924558  332029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-917885
	I0911 11:43:09.924622  332029 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0911 11:43:09.924691  332029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-917885
	I0911 11:43:09.942032  332029 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33112 SSHKeyPath:/home/jenkins/minikube-integration/17223-136166/.minikube/machines/auto-917885/id_rsa Username:docker}
	I0911 11:43:09.943207  332029 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33112 SSHKeyPath:/home/jenkins/minikube-integration/17223-136166/.minikube/machines/auto-917885/id_rsa Username:docker}
	I0911 11:43:10.119101  332029 ssh_runner.go:195] Run: systemctl --version
	I0911 11:43:10.123422  332029 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0911 11:43:10.263704  332029 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0911 11:43:10.268064  332029 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0911 11:43:10.286060  332029 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0911 11:43:10.286172  332029 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0911 11:43:10.313886  332029 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0911 11:43:10.313909  332029 start.go:466] detecting cgroup driver to use...
	I0911 11:43:10.313939  332029 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0911 11:43:10.313979  332029 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0911 11:43:10.328259  332029 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0911 11:43:10.338639  332029 docker.go:196] disabling cri-docker service (if available) ...
	I0911 11:43:10.338715  332029 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0911 11:43:10.350904  332029 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0911 11:43:10.364059  332029 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0911 11:43:10.439750  332029 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0911 11:43:10.515881  332029 docker.go:212] disabling docker service ...
	I0911 11:43:10.515940  332029 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0911 11:43:10.533588  332029 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0911 11:43:10.544152  332029 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0911 11:43:10.627817  332029 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0911 11:43:10.716047  332029 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0911 11:43:10.726750  332029 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0911 11:43:10.741105  332029 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0911 11:43:10.741166  332029 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 11:43:10.749865  332029 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0911 11:43:10.749922  332029 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 11:43:10.758917  332029 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 11:43:10.767720  332029 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
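Taken together, the three sed edits above pin the pause image and switch CRI-O to the cgroupfs driver with conmon in the pod cgroup. The drop-in should end up looking roughly like this (the section headers are an assumption from stock CRI-O packaging; the log only shows the key/value lines):

    # /etc/crio/crio.conf.d/02-crio.conf (sketch)
    [crio.image]
    pause_image = "registry.k8s.io/pause:3.9"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"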
	I0911 11:43:10.776532  332029 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0911 11:43:10.784678  332029 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0911 11:43:10.792031  332029 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0911 11:43:10.799286  332029 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0911 11:43:10.874361  332029 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0911 11:43:10.981651  332029 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0911 11:43:10.981707  332029 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0911 11:43:10.985258  332029 start.go:534] Will wait 60s for crictl version
	I0911 11:43:10.985299  332029 ssh_runner.go:195] Run: which crictl
	I0911 11:43:10.988407  332029 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0911 11:43:11.024605  332029 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0911 11:43:11.024693  332029 ssh_runner.go:195] Run: crio --version
	I0911 11:43:11.065205  332029 ssh_runner.go:195] Run: crio --version
	I0911 11:43:11.110271  332029 out.go:177] * Preparing Kubernetes v1.28.1 on CRI-O 1.24.6 ...
	I0911 11:43:08.277182  332971 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17223-136166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-917885:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b -I lz4 -xf /preloaded.tar -C /extractDir: (3.698899018s)
	I0911 11:43:08.277236  332971 kic.go:199] duration metric: took 3.699082 seconds to extract preloaded images to volume
	W0911 11:43:08.277396  332971 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0911 11:43:08.277525  332971 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0911 11:43:08.339126  332971 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kindnet-917885 --name kindnet-917885 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-917885 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kindnet-917885 --network kindnet-917885 --ip 192.168.94.2 --volume kindnet-917885:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b
	I0911 11:43:08.696209  332971 cli_runner.go:164] Run: docker container inspect kindnet-917885 --format={{.State.Running}}
	I0911 11:43:08.717474  332971 cli_runner.go:164] Run: docker container inspect kindnet-917885 --format={{.State.Status}}
	I0911 11:43:08.735412  332971 cli_runner.go:164] Run: docker exec kindnet-917885 stat /var/lib/dpkg/alternatives/iptables
	I0911 11:43:08.779586  332971 oci.go:144] the created container "kindnet-917885" has a running status.
	I0911 11:43:08.779624  332971 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/17223-136166/.minikube/machines/kindnet-917885/id_rsa...
	I0911 11:43:08.881366  332971 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17223-136166/.minikube/machines/kindnet-917885/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0911 11:43:08.904514  332971 cli_runner.go:164] Run: docker container inspect kindnet-917885 --format={{.State.Status}}
	I0911 11:43:08.923295  332971 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0911 11:43:08.923326  332971 kic_runner.go:114] Args: [docker exec --privileged kindnet-917885 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0911 11:43:08.977313  332971 cli_runner.go:164] Run: docker container inspect kindnet-917885 --format={{.State.Status}}
	I0911 11:43:08.999407  332971 machine.go:88] provisioning docker machine ...
	I0911 11:43:08.999450  332971 ubuntu.go:169] provisioning hostname "kindnet-917885"
	I0911 11:43:08.999517  332971 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-917885
	I0911 11:43:09.021290  332971 main.go:141] libmachine: Using SSH client type: native
	I0911 11:43:09.022008  332971 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 127.0.0.1 33117 <nil> <nil>}
	I0911 11:43:09.022036  332971 main.go:141] libmachine: About to run SSH command:
	sudo hostname kindnet-917885 && echo "kindnet-917885" | sudo tee /etc/hostname
	I0911 11:43:09.022740  332971 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:39798->127.0.0.1:33117: read: connection reset by peer
	I0911 11:43:07.931564  306855 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0911 11:43:07.938025  306855 api_server.go:269] stopped: https://192.168.67.2:8443/healthz: Get "https://192.168.67.2:8443/healthz": dial tcp 192.168.67.2:8443: connect: connection refused
	I0911 11:43:07.938085  306855 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0911 11:43:07.938177  306855 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0911 11:43:07.972271  306855 cri.go:89] found id: "b7f54e8d3985de06b7fea85e2d43d2c57b87e1c8541ffc22fbb1a69c05525596"
	I0911 11:43:07.972291  306855 cri.go:89] found id: ""
	I0911 11:43:07.972297  306855 logs.go:284] 1 containers: [b7f54e8d3985de06b7fea85e2d43d2c57b87e1c8541ffc22fbb1a69c05525596]
	I0911 11:43:07.972352  306855 ssh_runner.go:195] Run: which crictl
	I0911 11:43:07.975786  306855 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0911 11:43:07.975837  306855 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0911 11:43:08.009408  306855 cri.go:89] found id: ""
	I0911 11:43:08.009436  306855 logs.go:284] 0 containers: []
	W0911 11:43:08.009445  306855 logs.go:286] No container was found matching "etcd"
	I0911 11:43:08.009451  306855 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0911 11:43:08.009502  306855 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0911 11:43:08.044449  306855 cri.go:89] found id: ""
	I0911 11:43:08.044485  306855 logs.go:284] 0 containers: []
	W0911 11:43:08.044496  306855 logs.go:286] No container was found matching "coredns"
	I0911 11:43:08.044504  306855 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0911 11:43:08.044558  306855 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0911 11:43:08.078114  306855 cri.go:89] found id: ""
	I0911 11:43:08.078143  306855 logs.go:284] 0 containers: []
	W0911 11:43:08.078153  306855 logs.go:286] No container was found matching "kube-scheduler"
	I0911 11:43:08.078161  306855 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0911 11:43:08.078218  306855 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0911 11:43:08.112488  306855 cri.go:89] found id: ""
	I0911 11:43:08.112515  306855 logs.go:284] 0 containers: []
	W0911 11:43:08.112522  306855 logs.go:286] No container was found matching "kube-proxy"
	I0911 11:43:08.112527  306855 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0911 11:43:08.112590  306855 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0911 11:43:08.145800  306855 cri.go:89] found id: ""
	I0911 11:43:08.145826  306855 logs.go:284] 0 containers: []
	W0911 11:43:08.145835  306855 logs.go:286] No container was found matching "kube-controller-manager"
	I0911 11:43:08.145841  306855 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0911 11:43:08.145905  306855 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0911 11:43:08.181648  306855 cri.go:89] found id: ""
	I0911 11:43:08.181678  306855 logs.go:284] 0 containers: []
	W0911 11:43:08.181688  306855 logs.go:286] No container was found matching "kindnet"
	I0911 11:43:08.181696  306855 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0911 11:43:08.181757  306855 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0911 11:43:08.238219  306855 cri.go:89] found id: ""
	I0911 11:43:08.238242  306855 logs.go:284] 0 containers: []
	W0911 11:43:08.238249  306855 logs.go:286] No container was found matching "storage-provisioner"
	I0911 11:43:08.238260  306855 logs.go:123] Gathering logs for kubelet ...
	I0911 11:43:08.238274  306855 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0911 11:43:08.337476  306855 logs.go:123] Gathering logs for dmesg ...
	I0911 11:43:08.337513  306855 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0911 11:43:08.372154  306855 logs.go:123] Gathering logs for describe nodes ...
	I0911 11:43:08.372192  306855 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0911 11:43:08.463038  306855 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0911 11:43:08.463063  306855 logs.go:123] Gathering logs for kube-apiserver [b7f54e8d3985de06b7fea85e2d43d2c57b87e1c8541ffc22fbb1a69c05525596] ...
	I0911 11:43:08.463076  306855 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b7f54e8d3985de06b7fea85e2d43d2c57b87e1c8541ffc22fbb1a69c05525596"
	I0911 11:43:08.505986  306855 logs.go:123] Gathering logs for CRI-O ...
	I0911 11:43:08.506026  306855 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0911 11:43:08.534161  306855 logs.go:123] Gathering logs for container status ...
	I0911 11:43:08.534274  306855 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0911 11:43:11.078379  306855 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0911 11:43:11.078811  306855 api_server.go:269] stopped: https://192.168.67.2:8443/healthz: Get "https://192.168.67.2:8443/healthz": dial tcp 192.168.67.2:8443: connect: connection refused
	I0911 11:43:11.078875  306855 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0911 11:43:11.078937  306855 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0911 11:43:11.116445  306855 cri.go:89] found id: "b7f54e8d3985de06b7fea85e2d43d2c57b87e1c8541ffc22fbb1a69c05525596"
	I0911 11:43:11.116470  306855 cri.go:89] found id: ""
	I0911 11:43:11.116480  306855 logs.go:284] 1 containers: [b7f54e8d3985de06b7fea85e2d43d2c57b87e1c8541ffc22fbb1a69c05525596]
	I0911 11:43:11.116535  306855 ssh_runner.go:195] Run: which crictl
	I0911 11:43:11.120217  306855 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0911 11:43:11.120273  306855 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0911 11:43:11.163427  306855 cri.go:89] found id: ""
	I0911 11:43:11.163451  306855 logs.go:284] 0 containers: []
	W0911 11:43:11.163461  306855 logs.go:286] No container was found matching "etcd"
	I0911 11:43:11.163467  306855 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0911 11:43:11.163525  306855 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0911 11:43:11.206377  306855 cri.go:89] found id: ""
	I0911 11:43:11.206402  306855 logs.go:284] 0 containers: []
	W0911 11:43:11.206412  306855 logs.go:286] No container was found matching "coredns"
	I0911 11:43:11.206419  306855 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0911 11:43:11.206475  306855 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0911 11:43:11.254480  306855 cri.go:89] found id: ""
	I0911 11:43:11.254522  306855 logs.go:284] 0 containers: []
	W0911 11:43:11.254534  306855 logs.go:286] No container was found matching "kube-scheduler"
	I0911 11:43:11.254542  306855 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0911 11:43:11.254622  306855 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0911 11:43:11.299758  306855 cri.go:89] found id: ""
	I0911 11:43:11.299796  306855 logs.go:284] 0 containers: []
	W0911 11:43:11.299807  306855 logs.go:286] No container was found matching "kube-proxy"
	I0911 11:43:11.299816  306855 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0911 11:43:11.299874  306855 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0911 11:43:11.363586  306855 cri.go:89] found id: ""
	I0911 11:43:11.363621  306855 logs.go:284] 0 containers: []
	W0911 11:43:11.363632  306855 logs.go:286] No container was found matching "kube-controller-manager"
	I0911 11:43:11.363641  306855 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0911 11:43:11.363700  306855 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0911 11:43:11.405113  306855 cri.go:89] found id: ""
	I0911 11:43:11.405133  306855 logs.go:284] 0 containers: []
	W0911 11:43:11.405140  306855 logs.go:286] No container was found matching "kindnet"
	I0911 11:43:11.405145  306855 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0911 11:43:11.405192  306855 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0911 11:43:11.482830  306855 cri.go:89] found id: ""
	I0911 11:43:11.482854  306855 logs.go:284] 0 containers: []
	W0911 11:43:11.482863  306855 logs.go:286] No container was found matching "storage-provisioner"
	I0911 11:43:11.482874  306855 logs.go:123] Gathering logs for describe nodes ...
	I0911 11:43:11.482893  306855 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0911 11:43:11.112173  332029 cli_runner.go:164] Run: docker network inspect auto-917885 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0911 11:43:11.131480  332029 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0911 11:43:11.135231  332029 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
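The one-liner above is minikube's idempotent /etc/hosts update: strip any stale host.minikube.internal line, append the current gateway IP, and copy the temp file back over the original (cp rather than mv, so the file's inode and ownership survive). Unrolled, with a literal tab between IP and hostname:

    { grep -v $'\thost.minikube.internal$' /etc/hosts
      echo "192.168.85.1	host.minikube.internal"
    } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts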
	I0911 11:43:11.146900  332029 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0911 11:43:11.146959  332029 ssh_runner.go:195] Run: sudo crictl images --output json
	I0911 11:43:11.210931  332029 crio.go:496] all images are preloaded for cri-o runtime.
	I0911 11:43:11.210960  332029 crio.go:415] Images already preloaded, skipping extraction
	I0911 11:43:11.211011  332029 ssh_runner.go:195] Run: sudo crictl images --output json
	I0911 11:43:11.257774  332029 crio.go:496] all images are preloaded for cri-o runtime.
	I0911 11:43:11.257794  332029 cache_images.go:84] Images are preloaded, skipping loading
	I0911 11:43:11.257856  332029 ssh_runner.go:195] Run: crio config
	I0911 11:43:11.320036  332029 cni.go:84] Creating CNI manager for ""
	I0911 11:43:11.320072  332029 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0911 11:43:11.320098  332029 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0911 11:43:11.320121  332029 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-917885 NodeName:auto-917885 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0911 11:43:11.320295  332029 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-917885"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0911 11:43:11.320389  332029 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=auto-917885 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:auto-917885 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0911 11:43:11.320457  332029 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0911 11:43:11.330713  332029 binaries.go:44] Found k8s binaries, skipping transfer
	I0911 11:43:11.330778  332029 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0911 11:43:11.339920  332029 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (421 bytes)
	I0911 11:43:11.356791  332029 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0911 11:43:11.379085  332029 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2092 bytes)
	I0911 11:43:11.398515  332029 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0911 11:43:11.403013  332029 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0911 11:43:11.415061  332029 certs.go:56] Setting up /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/auto-917885 for IP: 192.168.85.2
	I0911 11:43:11.415122  332029 certs.go:190] acquiring lock for shared ca certs: {Name:mk582952512be9164e5fce9dc802f18bafa97346 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 11:43:11.415305  332029 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17223-136166/.minikube/ca.key
	I0911 11:43:11.415358  332029 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17223-136166/.minikube/proxy-client-ca.key
	I0911 11:43:11.415434  332029 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/auto-917885/client.key
	I0911 11:43:11.415453  332029 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/auto-917885/client.crt with IP's: []
	I0911 11:43:11.726512  332029 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/auto-917885/client.crt ...
	I0911 11:43:11.726543  332029 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/auto-917885/client.crt: {Name:mkac4ac31b98b98f96543b23e868530abc293031 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 11:43:11.726762  332029 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/auto-917885/client.key ...
	I0911 11:43:11.726779  332029 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/auto-917885/client.key: {Name:mkd8011a457ecb6c9a92be3bbc3ddb4af3b9db6c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 11:43:11.726876  332029 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/auto-917885/apiserver.key.43b9df8c
	I0911 11:43:11.726891  332029 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/auto-917885/apiserver.crt.43b9df8c with IP's: [192.168.85.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0911 11:43:11.852956  332029 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/auto-917885/apiserver.crt.43b9df8c ...
	I0911 11:43:11.852987  332029 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/auto-917885/apiserver.crt.43b9df8c: {Name:mk0db99fc6cf59a4b0bf55893b96a80bfd62b42a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 11:43:11.853190  332029 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/auto-917885/apiserver.key.43b9df8c ...
	I0911 11:43:11.853205  332029 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/auto-917885/apiserver.key.43b9df8c: {Name:mkf40d76208a91167f509bb89a5cd0baee31f7e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 11:43:11.853296  332029 certs.go:337] copying /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/auto-917885/apiserver.crt.43b9df8c -> /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/auto-917885/apiserver.crt
	I0911 11:43:11.853391  332029 certs.go:341] copying /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/auto-917885/apiserver.key.43b9df8c -> /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/auto-917885/apiserver.key
	I0911 11:43:11.853444  332029 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/auto-917885/proxy-client.key
	I0911 11:43:11.853462  332029 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/auto-917885/proxy-client.crt with IP's: []
	I0911 11:43:12.041495  332029 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/auto-917885/proxy-client.crt ...
	I0911 11:43:12.041523  332029 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/auto-917885/proxy-client.crt: {Name:mk60399ec28d898ea32193c36fb15f7d975e6000 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 11:43:12.041673  332029 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/auto-917885/proxy-client.key ...
	I0911 11:43:12.041683  332029 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/auto-917885/proxy-client.key: {Name:mk6a19de294038c20e55b5fcb30414e7a5745cec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 11:43:12.041835  332029 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-136166/.minikube/certs/home/jenkins/minikube-integration/17223-136166/.minikube/certs/143417.pem (1338 bytes)
	W0911 11:43:12.041869  332029 certs.go:433] ignoring /home/jenkins/minikube-integration/17223-136166/.minikube/certs/home/jenkins/minikube-integration/17223-136166/.minikube/certs/143417_empty.pem, impossibly tiny 0 bytes
	I0911 11:43:12.041879  332029 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-136166/.minikube/certs/home/jenkins/minikube-integration/17223-136166/.minikube/certs/ca-key.pem (1679 bytes)
	I0911 11:43:12.041909  332029 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-136166/.minikube/certs/home/jenkins/minikube-integration/17223-136166/.minikube/certs/ca.pem (1082 bytes)
	I0911 11:43:12.041936  332029 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-136166/.minikube/certs/home/jenkins/minikube-integration/17223-136166/.minikube/certs/cert.pem (1123 bytes)
	I0911 11:43:12.041958  332029 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-136166/.minikube/certs/home/jenkins/minikube-integration/17223-136166/.minikube/certs/key.pem (1679 bytes)
	I0911 11:43:12.041996  332029 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-136166/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17223-136166/.minikube/files/etc/ssl/certs/1434172.pem (1708 bytes)
	I0911 11:43:12.042608  332029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/auto-917885/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0911 11:43:12.068485  332029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/auto-917885/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0911 11:43:12.099318  332029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/auto-917885/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0911 11:43:12.122381  332029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/auto-917885/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0911 11:43:12.144796  332029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0911 11:43:12.170257  332029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0911 11:43:12.201978  332029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0911 11:43:12.226414  332029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0911 11:43:12.248570  332029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/certs/143417.pem --> /usr/share/ca-certificates/143417.pem (1338 bytes)
	I0911 11:43:12.281698  332029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/files/etc/ssl/certs/1434172.pem --> /usr/share/ca-certificates/1434172.pem (1708 bytes)
	I0911 11:43:12.308814  332029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0911 11:43:12.337544  332029 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0911 11:43:12.355271  332029 ssh_runner.go:195] Run: openssl version
	I0911 11:43:12.361375  332029 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1434172.pem && ln -fs /usr/share/ca-certificates/1434172.pem /etc/ssl/certs/1434172.pem"
	I0911 11:43:12.372515  332029 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1434172.pem
	I0911 11:43:12.376164  332029 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 11 11:15 /usr/share/ca-certificates/1434172.pem
	I0911 11:43:12.376226  332029 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1434172.pem
	I0911 11:43:12.383499  332029 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1434172.pem /etc/ssl/certs/3ec20f2e.0"
	I0911 11:43:12.393618  332029 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0911 11:43:12.403617  332029 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0911 11:43:12.407153  332029 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 11 11:09 /usr/share/ca-certificates/minikubeCA.pem
	I0911 11:43:12.407217  332029 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0911 11:43:12.414335  332029 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0911 11:43:12.424372  332029 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/143417.pem && ln -fs /usr/share/ca-certificates/143417.pem /etc/ssl/certs/143417.pem"
	I0911 11:43:12.433178  332029 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/143417.pem
	I0911 11:43:12.436421  332029 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 11 11:15 /usr/share/ca-certificates/143417.pem
	I0911 11:43:12.436468  332029 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/143417.pem
	I0911 11:43:12.442613  332029 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/143417.pem /etc/ssl/certs/51391683.0"
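The ls/openssl/ln pattern repeated above (for 1434172.pem, minikubeCA.pem and 143417.pem) installs each certificate into OpenSSL's hashed lookup directory: openssl x509 -hash prints the subject-name hash (51391683 for 143417.pem here), and the <hash>.0 symlink is what a tool like c_rehash would create. One iteration by hand:

    HASH=$(openssl x509 -hash -noout -in /etc/ssl/certs/143417.pem)
    sudo ln -fs /etc/ssl/certs/143417.pem "/etc/ssl/certs/${HASH}.0"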
	I0911 11:43:12.451231  332029 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0911 11:43:12.454403  332029 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0911 11:43:12.454459  332029 kubeadm.go:404] StartCluster: {Name:auto-917885 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:auto-917885 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 11:43:12.454552  332029 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0911 11:43:12.454603  332029 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0911 11:43:12.507699  332029 cri.go:89] found id: ""
	I0911 11:43:12.507786  332029 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0911 11:43:12.516677  332029 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0911 11:43:12.524887  332029 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0911 11:43:12.524946  332029 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0911 11:43:12.533165  332029 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0911 11:43:12.533211  332029 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0911 11:43:12.586668  332029 kubeadm.go:322] [init] Using Kubernetes version: v1.28.1
	I0911 11:43:12.586934  332029 kubeadm.go:322] [preflight] Running pre-flight checks
	I0911 11:43:12.629539  332029 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0911 11:43:12.629625  332029 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1041-gcp
	I0911 11:43:12.629671  332029 kubeadm.go:322] OS: Linux
	I0911 11:43:12.629720  332029 kubeadm.go:322] CGROUPS_CPU: enabled
	I0911 11:43:12.629761  332029 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0911 11:43:12.629823  332029 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0911 11:43:12.629866  332029 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0911 11:43:12.629916  332029 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0911 11:43:12.629987  332029 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0911 11:43:12.630027  332029 kubeadm.go:322] CGROUPS_PIDS: enabled
	I0911 11:43:12.630075  332029 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I0911 11:43:12.630200  332029 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I0911 11:43:12.715287  332029 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0911 11:43:12.715431  332029 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0911 11:43:12.715566  332029 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0911 11:43:12.948298  332029 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
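
The kubeadm init invocation above passes a comma-separated --ignore-preflight-errors list so that pre-provisioned directories and the docker driver's environment do not abort bootstrap. A minimal Go sketch of assembling such a command line (the shortened ignore list and the error handling here are illustrative, not minikube's actual code):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Preflight checks to tolerate; a subset of the list in the log above.
	ignored := []string{
		"DirAvailable--etc-kubernetes-manifests",
		"DirAvailable--var-lib-minikube-etcd",
		"Port-10250", "Swap", "NumCPU", "Mem", "SystemVerification",
	}
	args := []string{
		"init",
		"--config", "/var/tmp/minikube/kubeadm.yaml",
		"--ignore-preflight-errors=" + strings.Join(ignored, ","),
	}
	// Runs kubeadm and captures stdout+stderr together, as the log does.
	out, err := exec.Command("kubeadm", args...).CombinedOutput()
	fmt.Printf("%s\n", out)
	if err != nil {
		fmt.Println("kubeadm init failed:", err)
	}
}
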
	I0911 11:43:11.285987  333486 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0911 11:43:11.286017  333486 machine.go:91] provisioned docker machine in 6.451367854s
	I0911 11:43:11.286030  333486 start.go:300] post-start starting for "pause-844693" (driver="docker")
	I0911 11:43:11.286042  333486 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0911 11:43:11.286132  333486 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0911 11:43:11.286182  333486 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-844693
	I0911 11:43:11.307050  333486 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33107 SSHKeyPath:/home/jenkins/minikube-integration/17223-136166/.minikube/machines/pause-844693/id_rsa Username:docker}
	I0911 11:43:11.405300  333486 ssh_runner.go:195] Run: cat /etc/os-release
	I0911 11:43:11.408871  333486 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0911 11:43:11.408907  333486 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0911 11:43:11.408920  333486 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0911 11:43:11.408928  333486 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0911 11:43:11.408941  333486 filesync.go:126] Scanning /home/jenkins/minikube-integration/17223-136166/.minikube/addons for local assets ...
	I0911 11:43:11.409004  333486 filesync.go:126] Scanning /home/jenkins/minikube-integration/17223-136166/.minikube/files for local assets ...
	I0911 11:43:11.409093  333486 filesync.go:149] local asset: /home/jenkins/minikube-integration/17223-136166/.minikube/files/etc/ssl/certs/1434172.pem -> 1434172.pem in /etc/ssl/certs
	I0911 11:43:11.409200  333486 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0911 11:43:11.420179  333486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/files/etc/ssl/certs/1434172.pem --> /etc/ssl/certs/1434172.pem (1708 bytes)
	I0911 11:43:11.446924  333486 start.go:303] post-start completed in 160.874894ms
	I0911 11:43:11.446998  333486 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0911 11:43:11.447044  333486 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-844693
	I0911 11:43:11.468260  333486 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33107 SSHKeyPath:/home/jenkins/minikube-integration/17223-136166/.minikube/machines/pause-844693/id_rsa Username:docker}
	I0911 11:43:11.582593  333486 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0911 11:43:11.589173  333486 fix.go:56] fixHost completed within 6.780964082s
	I0911 11:43:11.589199  333486 start.go:83] releasing machines lock for "pause-844693", held for 6.781009426s
	I0911 11:43:11.589270  333486 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-844693
	I0911 11:43:11.613924  333486 ssh_runner.go:195] Run: cat /version.json
	I0911 11:43:11.613979  333486 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-844693
	I0911 11:43:11.613990  333486 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0911 11:43:11.614042  333486 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-844693
	I0911 11:43:11.636822  333486 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33107 SSHKeyPath:/home/jenkins/minikube-integration/17223-136166/.minikube/machines/pause-844693/id_rsa Username:docker}
	I0911 11:43:11.639682  333486 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33107 SSHKeyPath:/home/jenkins/minikube-integration/17223-136166/.minikube/machines/pause-844693/id_rsa Username:docker}
	I0911 11:43:12.162675  333486 ssh_runner.go:195] Run: systemctl --version
	I0911 11:43:12.168474  333486 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0911 11:43:12.464323  333486 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0911 11:43:12.472149  333486 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0911 11:43:12.483211  333486 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0911 11:43:12.483296  333486 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0911 11:43:12.495300  333486 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0911 11:43:12.495326  333486 start.go:466] detecting cgroup driver to use...
	I0911 11:43:12.495359  333486 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0911 11:43:12.495407  333486 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0911 11:43:12.571831  333486 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0911 11:43:12.587063  333486 docker.go:196] disabling cri-docker service (if available) ...
	I0911 11:43:12.587110  333486 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0911 11:43:12.607400  333486 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0911 11:43:12.669671  333486 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0911 11:43:12.980146  333486 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0911 11:43:13.261608  333486 docker.go:212] disabling docker service ...
	I0911 11:43:13.261672  333486 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0911 11:43:13.277615  333486 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0911 11:43:13.292503  333486 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0911 11:43:13.664743  333486 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0911 11:43:13.894906  333486 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0911 11:43:13.909934  333486 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0911 11:43:13.972738  333486 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0911 11:43:13.972803  333486 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 11:43:13.987443  333486 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0911 11:43:13.987507  333486 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 11:43:13.999727  333486 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 11:43:14.011903  333486 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
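
The four sed runs above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, switch cgroup_manager to cgroupfs, and re-point conmon_cgroup at "pod". The same edits expressed with Go's regexp package, operating on an inline sample config rather than the real file:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `pause_image = "registry.k8s.io/pause:3.6"
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"`

	// Pin the pause image, exactly as the first sed above does.
	pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	conf = pause.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)

	// Force the cgroupfs cgroup manager detected on the host.
	mgr := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
	conf = mgr.ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)

	// sed deletes conmon_cgroup and re-adds it after cgroup_manager;
	// a single substitution gives the same end state here.
	con := regexp.MustCompile(`(?m)^conmon_cgroup = .*$`)
	conf = con.ReplaceAllString(conf, `conmon_cgroup = "pod"`)

	fmt.Println(conf)
}
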
	I0911 11:43:12.951479  332029 out.go:204]   - Generating certificates and keys ...
	I0911 11:43:12.951651  332029 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0911 11:43:12.951725  332029 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0911 11:43:13.212510  332029 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0911 11:43:13.380152  332029 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0911 11:43:13.558298  332029 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0911 11:43:13.744661  332029 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0911 11:43:13.939764  332029 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0911 11:43:13.939977  332029 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [auto-917885 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0911 11:43:14.184252  332029 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0911 11:43:14.184445  332029 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [auto-917885 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0911 11:43:14.266337  332029 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0911 11:43:14.419439  332029 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
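
Each [certs] line corresponds to a key pair and certificate kubeadm writes under /var/lib/minikube/certs, starting from a self-signed CA. A stripped-down sketch of the CA step using only the Go standard library (kubeadm's real implementation adds usages and SANs per component, so treat this as an approximation):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "etcd-ca"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	// Template and parent are the same certificate: self-signed CA.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
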
	I0911 11:43:12.175779  332971 main.go:141] libmachine: SSH cmd err, output: <nil>: kindnet-917885
	
	I0911 11:43:12.175864  332971 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-917885
	I0911 11:43:12.203871  332971 main.go:141] libmachine: Using SSH client type: native
	I0911 11:43:12.204276  332971 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 127.0.0.1 33117 <nil> <nil>}
	I0911 11:43:12.204289  332971 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-917885' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-917885/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-917885' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0911 11:43:12.342026  332971 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0911 11:43:12.342056  332971 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17223-136166/.minikube CaCertPath:/home/jenkins/minikube-integration/17223-136166/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17223-136166/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17223-136166/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17223-136166/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17223-136166/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17223-136166/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17223-136166/.minikube}
	I0911 11:43:12.342082  332971 ubuntu.go:177] setting up certificates
	I0911 11:43:12.342114  332971 provision.go:83] configureAuth start
	I0911 11:43:12.342177  332971 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-917885
	I0911 11:43:12.359887  332971 provision.go:138] copyHostCerts
	I0911 11:43:12.359957  332971 exec_runner.go:144] found /home/jenkins/minikube-integration/17223-136166/.minikube/ca.pem, removing ...
	I0911 11:43:12.359968  332971 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17223-136166/.minikube/ca.pem
	I0911 11:43:12.360048  332971 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17223-136166/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17223-136166/.minikube/ca.pem (1082 bytes)
	I0911 11:43:12.360211  332971 exec_runner.go:144] found /home/jenkins/minikube-integration/17223-136166/.minikube/cert.pem, removing ...
	I0911 11:43:12.360221  332971 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17223-136166/.minikube/cert.pem
	I0911 11:43:12.360258  332971 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17223-136166/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17223-136166/.minikube/cert.pem (1123 bytes)
	I0911 11:43:12.360375  332971 exec_runner.go:144] found /home/jenkins/minikube-integration/17223-136166/.minikube/key.pem, removing ...
	I0911 11:43:12.360383  332971 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17223-136166/.minikube/key.pem
	I0911 11:43:12.360419  332971 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17223-136166/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17223-136166/.minikube/key.pem (1679 bytes)
	I0911 11:43:12.360551  332971 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17223-136166/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17223-136166/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17223-136166/.minikube/certs/ca-key.pem org=jenkins.kindnet-917885 san=[192.168.94.2 127.0.0.1 localhost 127.0.0.1 minikube kindnet-917885]
	I0911 11:43:12.646544  332971 provision.go:172] copyRemoteCerts
	I0911 11:43:12.646611  332971 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0911 11:43:12.646647  332971 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-917885
	I0911 11:43:12.676089  332971 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33117 SSHKeyPath:/home/jenkins/minikube-integration/17223-136166/.minikube/machines/kindnet-917885/id_rsa Username:docker}
	I0911 11:43:12.780659  332971 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0911 11:43:12.808561  332971 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0911 11:43:12.832784  332971 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0911 11:43:12.855894  332971 provision.go:86] duration metric: configureAuth took 513.760393ms
	I0911 11:43:12.855923  332971 ubuntu.go:193] setting minikube options for container-runtime
	I0911 11:43:12.856117  332971 config.go:182] Loaded profile config "kindnet-917885": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0911 11:43:12.856227  332971 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-917885
	I0911 11:43:12.882601  332971 main.go:141] libmachine: Using SSH client type: native
	I0911 11:43:12.883252  332971 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 127.0.0.1 33117 <nil> <nil>}
	I0911 11:43:12.883278  332971 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0911 11:43:13.147541  332971 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0911 11:43:13.147571  332971 machine.go:91] provisioned docker machine in 4.148135425s
	I0911 11:43:13.147582  332971 client.go:171] LocalClient.Create took 11.780924164s
	I0911 11:43:13.147601  332971 start.go:167] duration metric: libmachine.API.Create for "kindnet-917885" took 11.780973851s
	I0911 11:43:13.147611  332971 start.go:300] post-start starting for "kindnet-917885" (driver="docker")
	I0911 11:43:13.147622  332971 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0911 11:43:13.147684  332971 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0911 11:43:13.147725  332971 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-917885
	I0911 11:43:13.169512  332971 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33117 SSHKeyPath:/home/jenkins/minikube-integration/17223-136166/.minikube/machines/kindnet-917885/id_rsa Username:docker}
	I0911 11:43:13.273841  332971 ssh_runner.go:195] Run: cat /etc/os-release
	I0911 11:43:13.277733  332971 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0911 11:43:13.277779  332971 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0911 11:43:13.277800  332971 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0911 11:43:13.277809  332971 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0911 11:43:13.277822  332971 filesync.go:126] Scanning /home/jenkins/minikube-integration/17223-136166/.minikube/addons for local assets ...
	I0911 11:43:13.277883  332971 filesync.go:126] Scanning /home/jenkins/minikube-integration/17223-136166/.minikube/files for local assets ...
	I0911 11:43:13.277978  332971 filesync.go:149] local asset: /home/jenkins/minikube-integration/17223-136166/.minikube/files/etc/ssl/certs/1434172.pem -> 1434172.pem in /etc/ssl/certs
	I0911 11:43:13.278123  332971 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0911 11:43:13.289212  332971 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/files/etc/ssl/certs/1434172.pem --> /etc/ssl/certs/1434172.pem (1708 bytes)
	I0911 11:43:13.316654  332971 start.go:303] post-start completed in 169.029243ms
	I0911 11:43:13.316995  332971 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-917885
	I0911 11:43:13.333622  332971 profile.go:148] Saving config to /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/kindnet-917885/config.json ...
	I0911 11:43:13.333945  332971 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0911 11:43:13.333998  332971 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-917885
	I0911 11:43:13.351639  332971 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33117 SSHKeyPath:/home/jenkins/minikube-integration/17223-136166/.minikube/machines/kindnet-917885/id_rsa Username:docker}
	I0911 11:43:13.446901  332971 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0911 11:43:13.451110  332971 start.go:128] duration metric: createHost completed in 12.087115886s
	I0911 11:43:13.451135  332971 start.go:83] releasing machines lock for "kindnet-917885", held for 12.087293995s
	I0911 11:43:13.451204  332971 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-917885
	I0911 11:43:13.475139  332971 ssh_runner.go:195] Run: cat /version.json
	I0911 11:43:13.475156  332971 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0911 11:43:13.475197  332971 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-917885
	I0911 11:43:13.475217  332971 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-917885
	I0911 11:43:13.502685  332971 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33117 SSHKeyPath:/home/jenkins/minikube-integration/17223-136166/.minikube/machines/kindnet-917885/id_rsa Username:docker}
	I0911 11:43:13.506202  332971 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33117 SSHKeyPath:/home/jenkins/minikube-integration/17223-136166/.minikube/machines/kindnet-917885/id_rsa Username:docker}
	I0911 11:43:13.686524  332971 ssh_runner.go:195] Run: systemctl --version
	I0911 11:43:13.691782  332971 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0911 11:43:13.834658  332971 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0911 11:43:13.838901  332971 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0911 11:43:13.857419  332971 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0911 11:43:13.857504  332971 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0911 11:43:13.889783  332971 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0911 11:43:13.889807  332971 start.go:466] detecting cgroup driver to use...
	I0911 11:43:13.889839  332971 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0911 11:43:13.889887  332971 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0911 11:43:13.911298  332971 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0911 11:43:13.922049  332971 docker.go:196] disabling cri-docker service (if available) ...
	I0911 11:43:13.922143  332971 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0911 11:43:13.935275  332971 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0911 11:43:13.948828  332971 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0911 11:43:14.038870  332971 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0911 11:43:14.132803  332971 docker.go:212] disabling docker service ...
	I0911 11:43:14.132867  332971 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0911 11:43:14.153826  332971 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0911 11:43:14.168806  332971 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0911 11:43:14.254891  332971 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0911 11:43:14.353671  332971 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0911 11:43:14.365017  332971 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0911 11:43:14.380992  332971 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0911 11:43:14.381053  332971 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 11:43:14.390515  332971 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0911 11:43:14.390601  332971 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 11:43:14.399509  332971 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 11:43:14.408420  332971 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 11:43:14.418034  332971 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0911 11:43:14.426909  332971 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0911 11:43:14.434708  332971 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0911 11:43:14.442150  332971 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0911 11:43:14.532616  332971 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0911 11:43:14.642577  332971 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0911 11:43:14.642649  332971 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0911 11:43:14.646131  332971 start.go:534] Will wait 60s for crictl version
	I0911 11:43:14.646189  332971 ssh_runner.go:195] Run: which crictl
	I0911 11:43:14.649268  332971 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0911 11:43:14.687495  332971 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0911 11:43:14.687576  332971 ssh_runner.go:195] Run: crio --version
	I0911 11:43:14.721799  332971 ssh_runner.go:195] Run: crio --version
	I0911 11:43:14.759450  332971 out.go:177] * Preparing Kubernetes v1.28.1 on CRI-O 1.24.6 ...
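
Before the "Preparing Kubernetes" step, minikube waits up to 60s each for the CRI-O socket to appear and for crictl to answer a version query. A sketch of that stat-and-retry loop (the poll interval is an assumption; the socket path is taken from the log):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket stats the path until it exists or the deadline passes.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
		os.Exit(1)
	}
	fmt.Println("crio socket is up")
}
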
	I0911 11:43:14.696099  332029 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0911 11:43:14.696225  332029 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0911 11:43:14.974410  332029 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0911 11:43:15.031360  332029 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0911 11:43:15.299732  332029 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0911 11:43:15.398904  332029 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0911 11:43:15.399368  332029 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0911 11:43:15.401605  332029 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0911 11:43:14.760871  332971 cli_runner.go:164] Run: docker network inspect kindnet-917885 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0911 11:43:14.777778  332971 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I0911 11:43:14.781248  332971 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0911 11:43:14.791372  332971 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0911 11:43:14.791435  332971 ssh_runner.go:195] Run: sudo crictl images --output json
	I0911 11:43:14.841708  332971 crio.go:496] all images are preloaded for cri-o runtime.
	I0911 11:43:14.841728  332971 crio.go:415] Images already preloaded, skipping extraction
	I0911 11:43:14.841772  332971 ssh_runner.go:195] Run: sudo crictl images --output json
	I0911 11:43:14.874988  332971 crio.go:496] all images are preloaded for cri-o runtime.
	I0911 11:43:14.875007  332971 cache_images.go:84] Images are preloaded, skipping loading
	I0911 11:43:14.875060  332971 ssh_runner.go:195] Run: crio config
	I0911 11:43:14.924211  332971 cni.go:84] Creating CNI manager for "kindnet"
	I0911 11:43:14.924245  332971 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0911 11:43:14.924264  332971 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kindnet-917885 NodeName:kindnet-917885 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0911 11:43:14.924390  332971 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kindnet-917885"
	  kubeletExtraArgs:
	    node-ip: 192.168.94.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
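
The kubeadm config above is rendered from Go templates filled in with the kubeadm options dump that precedes it. A cut-down sketch covering only the networking stanza (the template text here is a simplification, not minikube's actual template):

package main

import (
	"os"
	"text/template"
)

// Only the networking stanza of the config printed above.
const networking = `networking:
  dnsDomain: {{.DNSDomain}}
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`

func main() {
	t := template.Must(template.New("networking").Parse(networking))
	// Values match the kindnet-917885 options in the log.
	t.Execute(os.Stdout, map[string]string{
		"DNSDomain":   "cluster.local",
		"PodSubnet":   "10.244.0.0/16",
		"ServiceCIDR": "10.96.0.0/12",
	})
}
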
	
	I0911 11:43:14.924456  332971 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=kindnet-917885 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:kindnet-917885 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:}
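
The drop-in's second ExecStart line is assembled from the cluster config: runtime endpoint, hostname override, and node IP all come from the options above. A sketch of composing those flags (the struct and field names are illustrative, not minikube's):

package main

import (
	"fmt"
	"strings"
)

// kubeletOpts is a hypothetical container for the values the log shows.
type kubeletOpts struct {
	Binary, CRISocket, NodeName, NodeIP string
}

func (k kubeletOpts) execStart() string {
	flags := []string{
		"--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf",
		"--config=/var/lib/kubelet/config.yaml",
		"--container-runtime-endpoint=" + k.CRISocket,
		"--hostname-override=" + k.NodeName,
		"--kubeconfig=/etc/kubernetes/kubelet.conf",
		"--node-ip=" + k.NodeIP,
	}
	return k.Binary + " " + strings.Join(flags, " ")
}

func main() {
	fmt.Println(kubeletOpts{
		Binary:    "/var/lib/minikube/binaries/v1.28.1/kubelet",
		CRISocket: "unix:///var/run/crio/crio.sock",
		NodeName:  "kindnet-917885",
		NodeIP:    "192.168.94.2",
	}.execStart())
}
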
	I0911 11:43:14.924510  332971 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0911 11:43:14.933032  332971 binaries.go:44] Found k8s binaries, skipping transfer
	I0911 11:43:14.933097  332971 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0911 11:43:14.941039  332971 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (424 bytes)
	I0911 11:43:14.957743  332971 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0911 11:43:14.975098  332971 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2095 bytes)
	I0911 11:43:14.992439  332971 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I0911 11:43:14.995785  332971 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
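
The hosts-file edit above is idempotent: strip any line already ending in the tab-separated name, append the fresh mapping, and copy the result back over /etc/hosts. The same upsert in Go (the temp-file-and-sudo-cp step from the log is omitted for brevity):

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHost drops stale lines for name and appends ip<TAB>name,
// mirroring the grep -v / echo pipeline in the log.
func upsertHost(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(hosts, "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return strings.Join(kept, "\n")
}

func main() {
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	fmt.Println(upsertHost(strings.TrimRight(string(data), "\n"),
		"192.168.94.2", "control-plane.minikube.internal"))
}
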
	I0911 11:43:15.006543  332971 certs.go:56] Setting up /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/kindnet-917885 for IP: 192.168.94.2
	I0911 11:43:15.006576  332971 certs.go:190] acquiring lock for shared ca certs: {Name:mk582952512be9164e5fce9dc802f18bafa97346 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 11:43:15.006744  332971 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17223-136166/.minikube/ca.key
	I0911 11:43:15.006806  332971 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17223-136166/.minikube/proxy-client-ca.key
	I0911 11:43:15.006860  332971 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/kindnet-917885/client.key
	I0911 11:43:15.006881  332971 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/kindnet-917885/client.crt with IP's: []
	I0911 11:43:15.500709  332971 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/kindnet-917885/client.crt ...
	I0911 11:43:15.500739  332971 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/kindnet-917885/client.crt: {Name:mk184d328255d58730b1965ed92467ece818018a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 11:43:15.500915  332971 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/kindnet-917885/client.key ...
	I0911 11:43:15.500929  332971 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/kindnet-917885/client.key: {Name:mk8607643ccf2f9e1d15a7c037e1efa764518611 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 11:43:15.501031  332971 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/kindnet-917885/apiserver.key.ad8e880a
	I0911 11:43:15.501050  332971 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/kindnet-917885/apiserver.crt.ad8e880a with IP's: [192.168.94.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0911 11:43:15.596310  332971 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/kindnet-917885/apiserver.crt.ad8e880a ...
	I0911 11:43:15.596347  332971 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/kindnet-917885/apiserver.crt.ad8e880a: {Name:mk751e13964bc37fa4a76f7995d79f02afa0a9e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 11:43:15.596566  332971 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/kindnet-917885/apiserver.key.ad8e880a ...
	I0911 11:43:15.596602  332971 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/kindnet-917885/apiserver.key.ad8e880a: {Name:mkdee36fc4b75a13c92537734e4550c412c1cbaf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 11:43:15.596701  332971 certs.go:337] copying /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/kindnet-917885/apiserver.crt.ad8e880a -> /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/kindnet-917885/apiserver.crt
	I0911 11:43:15.596804  332971 certs.go:341] copying /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/kindnet-917885/apiserver.key.ad8e880a -> /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/kindnet-917885/apiserver.key
	I0911 11:43:15.596875  332971 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/kindnet-917885/proxy-client.key
	I0911 11:43:15.596896  332971 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/kindnet-917885/proxy-client.crt with IP's: []
	I0911 11:43:15.991782  332971 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/kindnet-917885/proxy-client.crt ...
	I0911 11:43:15.991814  332971 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/kindnet-917885/proxy-client.crt: {Name:mk58cdbbc2657bead8f89c4f146e8867a51970ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 11:43:15.992022  332971 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/kindnet-917885/proxy-client.key ...
	I0911 11:43:15.992042  332971 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/kindnet-917885/proxy-client.key: {Name:mk378e269914afe4c94d09dbb1c953b4b89df556 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 11:43:15.992265  332971 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-136166/.minikube/certs/home/jenkins/minikube-integration/17223-136166/.minikube/certs/143417.pem (1338 bytes)
	W0911 11:43:15.992326  332971 certs.go:433] ignoring /home/jenkins/minikube-integration/17223-136166/.minikube/certs/home/jenkins/minikube-integration/17223-136166/.minikube/certs/143417_empty.pem, impossibly tiny 0 bytes
	I0911 11:43:15.992344  332971 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-136166/.minikube/certs/home/jenkins/minikube-integration/17223-136166/.minikube/certs/ca-key.pem (1679 bytes)
	I0911 11:43:15.992377  332971 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-136166/.minikube/certs/home/jenkins/minikube-integration/17223-136166/.minikube/certs/ca.pem (1082 bytes)
	I0911 11:43:15.992413  332971 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-136166/.minikube/certs/home/jenkins/minikube-integration/17223-136166/.minikube/certs/cert.pem (1123 bytes)
	I0911 11:43:15.992450  332971 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-136166/.minikube/certs/home/jenkins/minikube-integration/17223-136166/.minikube/certs/key.pem (1679 bytes)
	I0911 11:43:15.992505  332971 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-136166/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17223-136166/.minikube/files/etc/ssl/certs/1434172.pem (1708 bytes)
	I0911 11:43:15.993101  332971 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/kindnet-917885/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0911 11:43:16.016416  332971 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/kindnet-917885/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0911 11:43:16.043237  332971 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/kindnet-917885/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0911 11:43:16.066974  332971 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/kindnet-917885/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0911 11:43:16.088527  332971 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0911 11:43:16.110440  332971 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0911 11:43:16.131946  332971 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0911 11:43:16.154788  332971 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0911 11:43:16.176596  332971 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/certs/143417.pem --> /usr/share/ca-certificates/143417.pem (1338 bytes)
	I0911 11:43:16.197916  332971 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/files/etc/ssl/certs/1434172.pem --> /usr/share/ca-certificates/1434172.pem (1708 bytes)
	I0911 11:43:16.218954  332971 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0911 11:43:16.241388  332971 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0911 11:43:16.257192  332971 ssh_runner.go:195] Run: openssl version
	I0911 11:43:16.262303  332971 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/143417.pem && ln -fs /usr/share/ca-certificates/143417.pem /etc/ssl/certs/143417.pem"
	I0911 11:43:16.270779  332971 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/143417.pem
	I0911 11:43:16.273879  332971 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 11 11:15 /usr/share/ca-certificates/143417.pem
	I0911 11:43:16.273936  332971 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/143417.pem
	I0911 11:43:16.280017  332971 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/143417.pem /etc/ssl/certs/51391683.0"
	I0911 11:43:16.288370  332971 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1434172.pem && ln -fs /usr/share/ca-certificates/1434172.pem /etc/ssl/certs/1434172.pem"
	I0911 11:43:16.297025  332971 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1434172.pem
	I0911 11:43:16.300167  332971 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 11 11:15 /usr/share/ca-certificates/1434172.pem
	I0911 11:43:16.300225  332971 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1434172.pem
	I0911 11:43:16.306383  332971 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1434172.pem /etc/ssl/certs/3ec20f2e.0"
	I0911 11:43:16.315021  332971 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0911 11:43:16.323475  332971 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0911 11:43:16.326930  332971 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 11 11:09 /usr/share/ca-certificates/minikubeCA.pem
	I0911 11:43:16.326990  332971 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0911 11:43:16.333804  332971 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
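
The openssl x509 -hash calls above compute the subject hash that OpenSSL-based verifiers use to look up CA files; each PEM then gets a <hash>.0 symlink in /etc/ssl/certs. A sketch of one hash-and-link step (the hard-coded paths are taken from the log):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// linkBySubjectHash creates the /etc/ssl/certs/<hash>.0 symlink that
// the "test -L || ln -fs" commands above establish.
func linkBySubjectHash(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
	if _, err := os.Lstat(link); err == nil {
		return nil // already linked
	}
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println(err)
	}
}
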
	I0911 11:43:16.343188  332971 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0911 11:43:16.346425  332971 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0911 11:43:16.346487  332971 kubeadm.go:404] StartCluster: {Name:kindnet-917885 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:kindnet-917885 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 11:43:16.346561  332971 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0911 11:43:16.346602  332971 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0911 11:43:16.379741  332971 cri.go:89] found id: ""
	I0911 11:43:16.379813  332971 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0911 11:43:16.388030  332971 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0911 11:43:16.396200  332971 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0911 11:43:16.396256  332971 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0911 11:43:16.403992  332971 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0911 11:43:16.404033  332971 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0911 11:43:16.448187  332971 kubeadm.go:322] [init] Using Kubernetes version: v1.28.1
	I0911 11:43:16.448283  332971 kubeadm.go:322] [preflight] Running pre-flight checks
	I0911 11:43:16.488076  332971 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0911 11:43:16.488210  332971 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1041-gcp
	I0911 11:43:16.488280  332971 kubeadm.go:322] OS: Linux
	I0911 11:43:16.488351  332971 kubeadm.go:322] CGROUPS_CPU: enabled
	I0911 11:43:16.488416  332971 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0911 11:43:16.488509  332971 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0911 11:43:16.488593  332971 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0911 11:43:16.488662  332971 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0911 11:43:16.488730  332971 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0911 11:43:16.488788  332971 kubeadm.go:322] CGROUPS_PIDS: enabled
	I0911 11:43:16.488866  332971 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I0911 11:43:16.488946  332971 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I0911 11:43:16.566159  332971 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0911 11:43:16.566309  332971 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0911 11:43:16.566435  332971 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0911 11:43:16.830495  332971 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
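
The CGROUPS_* preflight lines above report which v1 controllers the kernel exposes; /proc/cgroups lists each controller with an enabled flag in its fourth column. A rough Go equivalent of that check:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/proc/cgroups")
	if err != nil {
		panic(err)
	}
	defer f.Close()
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := sc.Text()
		if strings.HasPrefix(line, "#") {
			continue // header row
		}
		// Columns: subsys_name hierarchy num_cgroups enabled
		fields := strings.Fields(line)
		if len(fields) == 4 {
			state := "disabled"
			if fields[3] == "1" {
				state = "enabled"
			}
			fmt.Printf("CGROUPS_%s: %s\n", strings.ToUpper(fields[0]), state)
		}
	}
}
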
	I0911 11:43:14.058801  333486 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0911 11:43:14.068855  333486 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0911 11:43:14.080417  333486 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0911 11:43:14.092809  333486 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0911 11:43:14.303657  333486 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0911 11:43:16.833610  332971 out.go:204]   - Generating certificates and keys ...
	I0911 11:43:16.833688  332971 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0911 11:43:16.833743  332971 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0911 11:43:16.907374  332971 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0911 11:43:16.974579  332971 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0911 11:43:17.273372  332971 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0911 11:43:17.438573  332971 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0911 11:43:18.062622  332971 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0911 11:43:18.063014  332971 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [kindnet-917885 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I0911 11:43:18.224739  332971 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0911 11:43:18.224925  332971 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [kindnet-917885 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I0911 11:43:18.487293  332971 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0911 11:43:18.587347  332971 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0911 11:43:18.758385  332971 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0911 11:43:18.758528  332971 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0911 11:43:18.912595  332971 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0911 11:43:19.188710  332971 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0911 11:43:19.374728  332971 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0911 11:43:19.538312  332971 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0911 11:43:19.538689  332971 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0911 11:43:19.541780  332971 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0911 11:43:15.404982  332029 out.go:204]   - Booting up control plane ...
	I0911 11:43:15.405172  332029 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0911 11:43:15.405286  332029 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0911 11:43:15.405347  332029 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0911 11:43:15.413262  332029 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0911 11:43:15.415167  332029 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0911 11:43:15.415247  332029 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0911 11:43:15.494868  332029 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0911 11:43:19.544126  332971 out.go:204]   - Booting up control plane ...
	I0911 11:43:19.544313  332971 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0911 11:43:19.544416  332971 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0911 11:43:19.544505  332971 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0911 11:43:19.552820  332971 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0911 11:43:19.553600  332971 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0911 11:43:19.553644  332971 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0911 11:43:19.634899  332971 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
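
wait-control-plane then polls the apiserver until it reports healthy, within a 4m budget (the next 332029 lines show it taking about 5 seconds). A sketch of such a poll against /healthz; the address is the kindnet node's from the log, and skipping TLS verification is a shortcut here, whereas kubeadm trusts the cluster CA instead:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			// Shortcut for the sketch only; do not skip verification in real code.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.94.2:8443/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("control plane is healthy")
				return
			}
		}
		time.Sleep(time.Second)
	}
	fmt.Println("timed out waiting for the control plane")
}
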
	I0911 11:43:20.496972  332029 kubeadm.go:322] [apiclient] All control plane components are healthy after 5.002214 seconds
	I0911 11:43:20.497155  332029 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0911 11:43:20.509465  332029 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0911 11:43:21.033834  332029 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0911 11:43:21.034134  332029 kubeadm.go:322] [mark-control-plane] Marking the node auto-917885 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0911 11:43:21.543489  332029 kubeadm.go:322] [bootstrap-token] Using token: hlx2xk.l6ot2giuv12spqqx
	I0911 11:43:21.545230  332029 out.go:204]   - Configuring RBAC rules ...
	I0911 11:43:21.545391  332029 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0911 11:43:21.549047  332029 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0911 11:43:21.557345  332029 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0911 11:43:21.561844  332029 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0911 11:43:21.564975  332029 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0911 11:43:21.567957  332029 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0911 11:43:21.580480  332029 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0911 11:43:21.847586  332029 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0911 11:43:21.965929  332029 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0911 11:43:21.967538  332029 kubeadm.go:322] 
	I0911 11:43:21.967617  332029 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0911 11:43:21.967624  332029 kubeadm.go:322] 
	I0911 11:43:21.967717  332029 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0911 11:43:21.967724  332029 kubeadm.go:322] 
	I0911 11:43:21.967753  332029 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0911 11:43:21.967818  332029 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0911 11:43:21.967880  332029 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0911 11:43:21.967887  332029 kubeadm.go:322] 
	I0911 11:43:21.967951  332029 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0911 11:43:21.967958  332029 kubeadm.go:322] 
	I0911 11:43:21.968014  332029 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0911 11:43:21.968021  332029 kubeadm.go:322] 
	I0911 11:43:21.968087  332029 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0911 11:43:21.968183  332029 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0911 11:43:21.968271  332029 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0911 11:43:21.968278  332029 kubeadm.go:322] 
	I0911 11:43:21.968373  332029 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0911 11:43:21.968459  332029 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0911 11:43:21.968464  332029 kubeadm.go:322] 
	I0911 11:43:21.968561  332029 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token hlx2xk.l6ot2giuv12spqqx \
	I0911 11:43:21.968680  332029 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:29dd5c485ccb33701674fbb0d9adfdc876e5b65e82ced9970c93ac8717b7a347 \
	I0911 11:43:21.968705  332029 kubeadm.go:322] 	--control-plane 
	I0911 11:43:21.968711  332029 kubeadm.go:322] 
	I0911 11:43:21.968811  332029 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0911 11:43:21.968818  332029 kubeadm.go:322] 
	I0911 11:43:21.968917  332029 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token hlx2xk.l6ot2giuv12spqqx \
	I0911 11:43:21.969042  332029 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:29dd5c485ccb33701674fbb0d9adfdc876e5b65e82ced9970c93ac8717b7a347 
	I0911 11:43:21.971960  332029 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1041-gcp\n", err: exit status 1
	I0911 11:43:21.972127  332029 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0911 11:43:21.972151  332029 cni.go:84] Creating CNI manager for ""
	I0911 11:43:21.972159  332029 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0911 11:43:21.974181  332029 out.go:177] * Configuring CNI (Container Networking Interface) ...
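	
	The kubeadm init output above ends with a join command whose --discovery-token-ca-cert-hash is a SHA-256 pin of the cluster CA's public key. A minimal sketch of how that value is derived (illustrative Go, not minikube or kubeadm source; the CA path is the one used elsewhere in this run):
	
	package main
	
	import (
		"crypto/sha256"
		"crypto/x509"
		"encoding/hex"
		"encoding/pem"
		"fmt"
		"log"
		"os"
	)
	
	func main() {
		// kubeadm pins the SHA-256 digest of the CA certificate's
		// Subject Public Key Info; that digest is the hex string after
		// "sha256:" in the join command above.
		data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
		if err != nil {
			log.Fatal(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			log.Fatal("no PEM block in ca.crt")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatal(err)
		}
		sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
		fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
	}
	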
	I0911 11:43:22.396779  333486 ssh_runner.go:235] Completed: sudo systemctl restart crio: (8.093069315s)
	I0911 11:43:22.396818  333486 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0911 11:43:22.396886  333486 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0911 11:43:22.400563  333486 start.go:534] Will wait 60s for crictl version
	I0911 11:43:22.400646  333486 ssh_runner.go:195] Run: which crictl
	I0911 11:43:22.404691  333486 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0911 11:43:22.457728  333486 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0911 11:43:22.457810  333486 ssh_runner.go:195] Run: crio --version
	I0911 11:43:22.503118  333486 ssh_runner.go:195] Run: crio --version
	I0911 11:43:22.546371  333486 out.go:177] * Preparing Kubernetes v1.28.1 on CRI-O 1.24.6 ...
	I0911 11:43:21.573218  306855 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.090302165s)
	W0911 11:43:21.573260  306855 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I0911 11:43:21.573271  306855 logs.go:123] Gathering logs for kube-apiserver [b7f54e8d3985de06b7fea85e2d43d2c57b87e1c8541ffc22fbb1a69c05525596] ...
	I0911 11:43:21.573284  306855 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b7f54e8d3985de06b7fea85e2d43d2c57b87e1c8541ffc22fbb1a69c05525596"
	I0911 11:43:21.622868  306855 logs.go:123] Gathering logs for CRI-O ...
	I0911 11:43:21.622917  306855 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0911 11:43:21.651460  306855 logs.go:123] Gathering logs for container status ...
	I0911 11:43:21.651498  306855 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0911 11:43:21.701273  306855 logs.go:123] Gathering logs for kubelet ...
	I0911 11:43:21.701309  306855 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0911 11:43:21.784515  306855 logs.go:123] Gathering logs for dmesg ...
	I0911 11:43:21.784569  306855 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0911 11:43:22.548178  333486 cli_runner.go:164] Run: docker network inspect pause-844693 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0911 11:43:22.567958  333486 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0911 11:43:22.572012  333486 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0911 11:43:22.572084  333486 ssh_runner.go:195] Run: sudo crictl images --output json
	I0911 11:43:22.620455  333486 crio.go:496] all images are preloaded for cri-o runtime.
	I0911 11:43:22.620481  333486 crio.go:415] Images already preloaded, skipping extraction
	I0911 11:43:22.620536  333486 ssh_runner.go:195] Run: sudo crictl images --output json
	I0911 11:43:22.660449  333486 crio.go:496] all images are preloaded for cri-o runtime.
	I0911 11:43:22.660474  333486 cache_images.go:84] Images are preloaded, skipping loading
	I0911 11:43:22.660545  333486 ssh_runner.go:195] Run: crio config
	I0911 11:43:22.730066  333486 cni.go:84] Creating CNI manager for ""
	I0911 11:43:22.730098  333486 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0911 11:43:22.730121  333486 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0911 11:43:22.730144  333486 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-844693 NodeName:pause-844693 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0911 11:43:22.730297  333486 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-844693"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0911 11:43:22.730362  333486 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=pause-844693 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:pause-844693 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
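	
	The kubeadm config above is written to /var/tmp/minikube/kubeadm.yaml.new (see the scp step below) as several YAML documents separated by ---. A minimal sketch of reading the ClusterConfiguration document back out of that file, assuming gopkg.in/yaml.v3 and running on the node (not minikube's own code):
	
	package main
	
	import (
		"fmt"
		"io"
		"log"
		"os"
	
		"gopkg.in/yaml.v3"
	)
	
	// clusterConfig captures just the fields we want to inspect;
	// yaml.v3 ignores everything else in the document.
	type clusterConfig struct {
		Kind                 string `yaml:"kind"`
		KubernetesVersion    string `yaml:"kubernetesVersion"`
		ControlPlaneEndpoint string `yaml:"controlPlaneEndpoint"`
		Networking           struct {
			PodSubnet     string `yaml:"podSubnet"`
			ServiceSubnet string `yaml:"serviceSubnet"`
		} `yaml:"networking"`
	}
	
	func main() {
		f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()
	
		dec := yaml.NewDecoder(f) // iterates over the ----separated documents
		for {
			var cc clusterConfig
			if err := dec.Decode(&cc); err == io.EOF {
				break
			} else if err != nil {
				log.Fatal(err)
			}
			if cc.Kind == "ClusterConfiguration" {
				fmt.Println(cc.KubernetesVersion, cc.ControlPlaneEndpoint,
					cc.Networking.PodSubnet, cc.Networking.ServiceSubnet)
			}
		}
	}
	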
	I0911 11:43:22.730410  333486 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0911 11:43:22.739348  333486 binaries.go:44] Found k8s binaries, skipping transfer
	I0911 11:43:22.739429  333486 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0911 11:43:22.747871  333486 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (422 bytes)
	I0911 11:43:22.764503  333486 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0911 11:43:22.782985  333486 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2093 bytes)
	I0911 11:43:22.805153  333486 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0911 11:43:22.808703  333486 certs.go:56] Setting up /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/pause-844693 for IP: 192.168.76.2
	I0911 11:43:22.808734  333486 certs.go:190] acquiring lock for shared ca certs: {Name:mk582952512be9164e5fce9dc802f18bafa97346 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 11:43:22.808896  333486 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17223-136166/.minikube/ca.key
	I0911 11:43:22.808951  333486 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17223-136166/.minikube/proxy-client-ca.key
	I0911 11:43:22.809052  333486 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/pause-844693/client.key
	I0911 11:43:22.809142  333486 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/pause-844693/apiserver.key.31bdca25
	I0911 11:43:22.809227  333486 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/pause-844693/proxy-client.key
	I0911 11:43:22.809368  333486 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-136166/.minikube/certs/home/jenkins/minikube-integration/17223-136166/.minikube/certs/143417.pem (1338 bytes)
	W0911 11:43:22.809404  333486 certs.go:433] ignoring /home/jenkins/minikube-integration/17223-136166/.minikube/certs/home/jenkins/minikube-integration/17223-136166/.minikube/certs/143417_empty.pem, impossibly tiny 0 bytes
	I0911 11:43:22.809431  333486 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-136166/.minikube/certs/home/jenkins/minikube-integration/17223-136166/.minikube/certs/ca-key.pem (1679 bytes)
	I0911 11:43:22.809466  333486 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-136166/.minikube/certs/home/jenkins/minikube-integration/17223-136166/.minikube/certs/ca.pem (1082 bytes)
	I0911 11:43:22.809502  333486 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-136166/.minikube/certs/home/jenkins/minikube-integration/17223-136166/.minikube/certs/cert.pem (1123 bytes)
	I0911 11:43:22.809536  333486 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-136166/.minikube/certs/home/jenkins/minikube-integration/17223-136166/.minikube/certs/key.pem (1679 bytes)
	I0911 11:43:22.809715  333486 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-136166/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17223-136166/.minikube/files/etc/ssl/certs/1434172.pem (1708 bytes)
	I0911 11:43:22.810561  333486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/pause-844693/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0911 11:43:22.842061  333486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/pause-844693/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0911 11:43:22.870043  333486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/pause-844693/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0911 11:43:22.904154  333486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/pause-844693/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0911 11:43:22.934359  333486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0911 11:43:22.959538  333486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0911 11:43:22.991252  333486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0911 11:43:23.026499  333486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0911 11:43:23.051888  333486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0911 11:43:23.095945  333486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/certs/143417.pem --> /usr/share/ca-certificates/143417.pem (1338 bytes)
	I0911 11:43:23.121500  333486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-136166/.minikube/files/etc/ssl/certs/1434172.pem --> /usr/share/ca-certificates/1434172.pem (1708 bytes)
	I0911 11:43:23.145698  333486 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0911 11:43:23.171429  333486 ssh_runner.go:195] Run: openssl version
	I0911 11:43:23.178842  333486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0911 11:43:23.194020  333486 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0911 11:43:23.197914  333486 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 11 11:09 /usr/share/ca-certificates/minikubeCA.pem
	I0911 11:43:23.197971  333486 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0911 11:43:23.204630  333486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0911 11:43:23.214542  333486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/143417.pem && ln -fs /usr/share/ca-certificates/143417.pem /etc/ssl/certs/143417.pem"
	I0911 11:43:23.223895  333486 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/143417.pem
	I0911 11:43:23.227164  333486 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 11 11:15 /usr/share/ca-certificates/143417.pem
	I0911 11:43:23.227244  333486 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/143417.pem
	I0911 11:43:23.234043  333486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/143417.pem /etc/ssl/certs/51391683.0"
	I0911 11:43:23.243089  333486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1434172.pem && ln -fs /usr/share/ca-certificates/1434172.pem /etc/ssl/certs/1434172.pem"
	I0911 11:43:23.254388  333486 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1434172.pem
	I0911 11:43:23.258040  333486 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 11 11:15 /usr/share/ca-certificates/1434172.pem
	I0911 11:43:23.258167  333486 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1434172.pem
	I0911 11:43:23.268875  333486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1434172.pem /etc/ssl/certs/3ec20f2e.0"
	I0911 11:43:23.279175  333486 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0911 11:43:23.283065  333486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0911 11:43:23.290568  333486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0911 11:43:23.298395  333486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0911 11:43:23.306718  333486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0911 11:43:23.314616  333486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0911 11:43:23.322227  333486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
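	
	Each `openssl x509 -noout -in <cert> -checkend 86400` run above exits non-zero if the certificate expires within the next 86400 seconds (24 hours), which is how minikube decides whether a control-plane cert needs regenerating. A hedged Go equivalent of one such check (illustrative; minikube itself shells out to openssl as logged):
	
	package main
	
	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os"
		"time"
	)
	
	func main() {
		// Same path as one of the -checkend runs above.
		data, err := os.ReadFile("/var/lib/minikube/certs/etcd/server.crt")
		if err != nil {
			log.Fatal(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			log.Fatal("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatal(err)
		}
		// Mirror `-checkend 86400`: fail if NotAfter falls within 24h.
		if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
			fmt.Println("certificate expires within 24h:", cert.NotAfter)
			os.Exit(1)
		}
		fmt.Println("certificate valid until", cert.NotAfter)
	}
	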
	I0911 11:43:23.329995  333486 kubeadm.go:404] StartCluster: {Name:pause-844693 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:pause-844693 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 11:43:23.330192  333486 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0911 11:43:23.330276  333486 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0911 11:43:23.366608  333486 cri.go:89] found id: "cdf9aa78109f17bfdb382122a5728c8159ea39b39801dbd64eb80d2483cc2cab"
	I0911 11:43:23.366640  333486 cri.go:89] found id: "fdb91a124a6a570b2436748b4ba6a86b898e9d6a13a3930db525639b7ccf74fd"
	I0911 11:43:23.366647  333486 cri.go:89] found id: "aa9227286c98956417f65ee195d8cc9c096f779ac33dd93e51ec1f63e9c64727"
	I0911 11:43:23.366653  333486 cri.go:89] found id: "76d35a166fd5d8b00d62567d0e510be9f811d2a2733ee48dbe533273800db765"
	I0911 11:43:23.366658  333486 cri.go:89] found id: "9a62d90cca609fcd0f7c1dfecfc6253779227bfcd3f89c5bc37f5abfab2e993c"
	I0911 11:43:23.366665  333486 cri.go:89] found id: "0885e2fcf44f13ce18fb0b2e5369f657935199c74ef3bb6c3f7d944dd92c903f"
	I0911 11:43:23.366670  333486 cri.go:89] found id: "a441792974757ee7a4534b129dde0ff64172775499e60591ed65e0082d1c2680"
	I0911 11:43:23.366675  333486 cri.go:89] found id: "43b750852cf7cf1ba60fa8e429fff93606a5b2db68b62a2e96080df44d120808"
	I0911 11:43:23.366681  333486 cri.go:89] found id: "98d435edeb4433e8035865016ccf3816a70447275adc8b069cb74e222026044b"
	I0911 11:43:23.366708  333486 cri.go:89] found id: "385f7e6d1f77e5b71772a46ca4a4f24f678c2c4c31f7b142a7d3c41c599e0115"
	I0911 11:43:23.366721  333486 cri.go:89] found id: "abcad4a868fa9e3492e9b8da9cdb9c09be851280ca45cb057ad2790cfbe873f4"
	I0911 11:43:23.366727  333486 cri.go:89] found id: "b3946a720abf45cb0400edf2961b8177cee7ded0d89a67215949fba8eed0285f"
	I0911 11:43:23.366738  333486 cri.go:89] found id: "1de4fb6c7d34a7290d7a4ddb1c1dcc8c2f6b06fbd043dab5a2b4c9385bee8829"
	I0911 11:43:23.366744  333486 cri.go:89] found id: "a131faaa13e53100059367ccbeb807c8ca911aaee113f897c694d56b0847b530"
	I0911 11:43:23.366759  333486 cri.go:89] found id: "dbe08d5d45acc84a41457fc5fd2e252933fc14c88b84fb18bb6d48ae40109115"
	I0911 11:43:23.366764  333486 cri.go:89] found id: "dbd37dfbd8007b159842812dbf088fe24d51c704801c40d390145bd3ef1ee2b7"
	I0911 11:43:23.366773  333486 cri.go:89] found id: ""
	I0911 11:43:23.366819  333486 ssh_runner.go:195] Run: sudo runc list -f json
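	
	The "found id" lines above come from running crictl with a pod-namespace label filter and splitting its --quiet output into container IDs. A minimal stand-in for that step (assumes crictl on PATH and sudo rights on the node; minikube actually runs this over SSH via ssh_runner):
	
	package main
	
	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)
	
	func main() {
		// Same invocation as the Run: line above; --quiet prints one
		// container ID per line for the kube-system namespace.
		out, err := exec.Command("sudo", "crictl", "ps", "-a",
			"--quiet", "--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			log.Fatal(err)
		}
		for _, id := range strings.Fields(string(out)) {
			fmt.Println("found id:", id)
		}
	}
	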
	
	* 
	* ==> CRI-O <==
	* Sep 11 11:43:41 pause-844693 crio[3183]: time="2023-09-11 11:43:41.309763998Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,RepoTags:[registry.k8s.io/coredns/coredns:v1.10.1],RepoDigests:[registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378],Size_:53621675,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=4a75d97a-8eb4-4831-ae6f-69135afac812 name=/runtime.v1.ImageService/ImageStatus
	Sep 11 11:43:41 pause-844693 crio[3183]: time="2023-09-11 11:43:41.310627948Z" level=info msg="Creating container: kube-system/coredns-5dd5756b68-zvh8m/coredns" id=0d45080a-b153-40df-a6f6-83b9329c49ac name=/runtime.v1.RuntimeService/CreateContainer
	Sep 11 11:43:41 pause-844693 crio[3183]: time="2023-09-11 11:43:41.310712569Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 11 11:43:41 pause-844693 crio[3183]: time="2023-09-11 11:43:41.322970026Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/e79c00afa1225c4c51b21280de83a6015eef9423c76e55397281bd1e634fce2c/merged/etc/passwd: no such file or directory"
	Sep 11 11:43:41 pause-844693 crio[3183]: time="2023-09-11 11:43:41.323020868Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/e79c00afa1225c4c51b21280de83a6015eef9423c76e55397281bd1e634fce2c/merged/etc/group: no such file or directory"
	Sep 11 11:43:41 pause-844693 crio[3183]: time="2023-09-11 11:43:41.378603079Z" level=info msg="Created container b2fbe23930c38fb42af9a143f14a02de2db053df7685bb7e2940a1a1be96c9c3: kube-system/coredns-5dd5756b68-zvh8m/coredns" id=0d45080a-b153-40df-a6f6-83b9329c49ac name=/runtime.v1.RuntimeService/CreateContainer
	Sep 11 11:43:41 pause-844693 crio[3183]: time="2023-09-11 11:43:41.379196632Z" level=info msg="Starting container: b2fbe23930c38fb42af9a143f14a02de2db053df7685bb7e2940a1a1be96c9c3" id=d09b7ac3-f4f2-4657-9da3-a9f43d36d26e name=/runtime.v1.RuntimeService/StartContainer
	Sep 11 11:43:41 pause-844693 crio[3183]: time="2023-09-11 11:43:41.388045015Z" level=info msg="Started container" PID=4114 containerID=b2fbe23930c38fb42af9a143f14a02de2db053df7685bb7e2940a1a1be96c9c3 description=kube-system/coredns-5dd5756b68-zvh8m/coredns id=d09b7ac3-f4f2-4657-9da3-a9f43d36d26e name=/runtime.v1.RuntimeService/StartContainer sandboxID=f1bbe20f37ff2bb977c6512344f792aa53f8cc5cb222f22515286e8e2bbdd5ed
	Sep 11 11:43:44 pause-844693 crio[3183]: time="2023-09-11 11:43:44.308660150Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.10.1" id=1625a166-0c0e-4415-8ad1-33bb87c75a66 name=/runtime.v1.ImageService/ImageStatus
	Sep 11 11:43:44 pause-844693 crio[3183]: time="2023-09-11 11:43:44.308862663Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,RepoTags:[registry.k8s.io/coredns/coredns:v1.10.1],RepoDigests:[registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378],Size_:53621675,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=1625a166-0c0e-4415-8ad1-33bb87c75a66 name=/runtime.v1.ImageService/ImageStatus
	Sep 11 11:43:44 pause-844693 crio[3183]: time="2023-09-11 11:43:44.309651949Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.10.1" id=589b0da9-909a-4206-9c42-7f0d83bfed7f name=/runtime.v1.ImageService/ImageStatus
	Sep 11 11:43:44 pause-844693 crio[3183]: time="2023-09-11 11:43:44.309847666Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,RepoTags:[registry.k8s.io/coredns/coredns:v1.10.1],RepoDigests:[registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378],Size_:53621675,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=589b0da9-909a-4206-9c42-7f0d83bfed7f name=/runtime.v1.ImageService/ImageStatus
	Sep 11 11:43:44 pause-844693 crio[3183]: time="2023-09-11 11:43:44.310859010Z" level=info msg="Creating container: kube-system/coredns-5dd5756b68-2gn29/coredns" id=e46f2c02-73b9-4f0a-b8e0-4162c28e1512 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 11 11:43:44 pause-844693 crio[3183]: time="2023-09-11 11:43:44.310949034Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 11 11:43:44 pause-844693 crio[3183]: time="2023-09-11 11:43:44.323021313Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/388c9dd6b22665f6960c5e0c86c5ca48667aafd5e93a27bfb89615fc5fcc150a/merged/etc/passwd: no such file or directory"
	Sep 11 11:43:44 pause-844693 crio[3183]: time="2023-09-11 11:43:44.323073076Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/388c9dd6b22665f6960c5e0c86c5ca48667aafd5e93a27bfb89615fc5fcc150a/merged/etc/group: no such file or directory"
	Sep 11 11:43:44 pause-844693 crio[3183]: time="2023-09-11 11:43:44.381326955Z" level=info msg="Created container 504dd4136806cf8fc1073d17eaf742bf69b22f4f925910e04efdaa11460d007c: kube-system/coredns-5dd5756b68-2gn29/coredns" id=e46f2c02-73b9-4f0a-b8e0-4162c28e1512 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 11 11:43:44 pause-844693 crio[3183]: time="2023-09-11 11:43:44.381961962Z" level=info msg="Starting container: 504dd4136806cf8fc1073d17eaf742bf69b22f4f925910e04efdaa11460d007c" id=22c48042-f4b7-4078-83c4-4b1b6ad3a966 name=/runtime.v1.RuntimeService/StartContainer
	Sep 11 11:43:44 pause-844693 crio[3183]: time="2023-09-11 11:43:44.390984441Z" level=info msg="Started container" PID=4167 containerID=504dd4136806cf8fc1073d17eaf742bf69b22f4f925910e04efdaa11460d007c description=kube-system/coredns-5dd5756b68-2gn29/coredns id=22c48042-f4b7-4078-83c4-4b1b6ad3a966 name=/runtime.v1.RuntimeService/StartContainer sandboxID=570ed816a3ca60dab3c956bd61a340fd015bc09e44a20cea1ed4ecc4de668ba8
	Sep 11 11:43:44 pause-844693 crio[3183]: time="2023-09-11 11:43:44.621713749Z" level=info msg="Stopping container: 504dd4136806cf8fc1073d17eaf742bf69b22f4f925910e04efdaa11460d007c (timeout: 30s)" id=803dd1a7-888c-45a7-a0e7-3c8d9e28bbcb name=/runtime.v1.RuntimeService/StopContainer
	Sep 11 11:43:49 pause-844693 crio[3183]: time="2023-09-11 11:43:49.763305644Z" level=info msg="Stopped container 504dd4136806cf8fc1073d17eaf742bf69b22f4f925910e04efdaa11460d007c: kube-system/coredns-5dd5756b68-2gn29/coredns" id=803dd1a7-888c-45a7-a0e7-3c8d9e28bbcb name=/runtime.v1.RuntimeService/StopContainer
	Sep 11 11:43:49 pause-844693 crio[3183]: time="2023-09-11 11:43:49.764099987Z" level=info msg="Stopping pod sandbox: 570ed816a3ca60dab3c956bd61a340fd015bc09e44a20cea1ed4ecc4de668ba8" id=f273d90a-a6ec-430a-be92-a04dcac50041 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 11 11:43:49 pause-844693 crio[3183]: time="2023-09-11 11:43:49.764303428Z" level=info msg="Got pod network &{Name:coredns-5dd5756b68-2gn29 Namespace:kube-system ID:570ed816a3ca60dab3c956bd61a340fd015bc09e44a20cea1ed4ecc4de668ba8 UID:ade2d2da-baae-423c-8c9a-6294d0d22277 NetNS:/var/run/netns/c6ee443a-3a95-466e-85b3-bcf07e94227a Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 11 11:43:49 pause-844693 crio[3183]: time="2023-09-11 11:43:49.764478933Z" level=info msg="Deleting pod kube-system_coredns-5dd5756b68-2gn29 from CNI network \"kindnet\" (type=ptp)"
	Sep 11 11:43:49 pause-844693 crio[3183]: time="2023-09-11 11:43:49.803801256Z" level=info msg="Stopped pod sandbox: 570ed816a3ca60dab3c956bd61a340fd015bc09e44a20cea1ed4ecc4de668ba8" id=f273d90a-a6ec-430a-be92-a04dcac50041 name=/runtime.v1.RuntimeService/StopPodSandbox
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	504dd4136806c       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   5 seconds ago       Exited              coredns                   2                   570ed816a3ca6       coredns-5dd5756b68-2gn29
	b2fbe23930c38       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   8 seconds ago       Running             coredns                   2                   f1bbe20f37ff2       coredns-5dd5756b68-zvh8m
	ac4f8827ccd76       5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77   25 seconds ago      Running             kube-apiserver            2                   f886ff95e63b0       kube-apiserver-pause-844693
	835bc7b9b230e       821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac   25 seconds ago      Running             kube-controller-manager   2                   caf398077a4f1       kube-controller-manager-pause-844693
	8a7deea25aedf       b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a   25 seconds ago      Running             kube-scheduler            2                   3143e4acee751       kube-scheduler-pause-844693
	5068566eb8b8e       b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da   25 seconds ago      Running             kindnet-cni               2                   4214cedbf1d53       kindnet-7tct8
	bb8630f2c0c73       6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5   25 seconds ago      Running             kube-proxy                2                   6b8bcfa7f2e07       kube-proxy-gfzb6
	d7048e0b4e834       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   25 seconds ago      Running             etcd                      2                   8801f7e114cbe       etcd-pause-844693
	cdf9aa78109f1       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   38 seconds ago      Exited              etcd                      1                   8801f7e114cbe       etcd-pause-844693
	fdb91a124a6a5       5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77   38 seconds ago      Exited              kube-apiserver            1                   f886ff95e63b0       kube-apiserver-pause-844693
	aa9227286c989       821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac   38 seconds ago      Exited              kube-controller-manager   1                   caf398077a4f1       kube-controller-manager-pause-844693
	76d35a166fd5d       b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a   38 seconds ago      Exited              kube-scheduler            1                   3143e4acee751       kube-scheduler-pause-844693
	9a62d90cca609       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   38 seconds ago      Exited              coredns                   1                   f1bbe20f37ff2       coredns-5dd5756b68-zvh8m
	0885e2fcf44f1       6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5   38 seconds ago      Exited              kube-proxy                1                   6b8bcfa7f2e07       kube-proxy-gfzb6
	a441792974757       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   38 seconds ago      Exited              coredns                   1                   570ed816a3ca6       coredns-5dd5756b68-2gn29
	43b750852cf7c       b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da   38 seconds ago      Exited              kindnet-cni               1                   4214cedbf1d53       kindnet-7tct8
	
	* 
	* ==> coredns [504dd4136806cf8fc1073d17eaf742bf69b22f4f925910e04efdaa11460d007c] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:52149 - 64496 "HINFO IN 5024998056764530279.7459128487944673813. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.019545154s
	
	* 
	* ==> coredns [9a62d90cca609fcd0f7c1dfecfc6253779227bfcd3f89c5bc37f5abfab2e993c] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:39801 - 34182 "HINFO IN 2913630947870021945.1069754061682992805. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.021065668s
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
	
	* 
	* ==> coredns [a441792974757ee7a4534b129dde0ff64172775499e60591ed65e0082d1c2680] <==
	* [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:53885 - 27010 "HINFO IN 5498766548903002151.6473315833692776976. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.031123367s
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
	
	* 
	* ==> coredns [b2fbe23930c38fb42af9a143f14a02de2db053df7685bb7e2940a1a1be96c9c3] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:42195 - 23296 "HINFO IN 2791760603007898383.7344023142748220875. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.020483865s
	
	* 
	* ==> describe nodes <==
	* Name:               pause-844693
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-844693
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=58460de6978298fe1c37b30354468f3a287d03e9
	                    minikube.k8s.io/name=pause-844693
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_11T11_42_46_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 11 Sep 2023 11:42:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-844693
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 11 Sep 2023 11:43:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 11 Sep 2023 11:42:59 +0000   Mon, 11 Sep 2023 11:42:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 11 Sep 2023 11:42:59 +0000   Mon, 11 Sep 2023 11:42:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 11 Sep 2023 11:42:59 +0000   Mon, 11 Sep 2023 11:42:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 11 Sep 2023 11:42:59 +0000   Mon, 11 Sep 2023 11:42:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-844693
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859436Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859436Ki
	  pods:               110
	System Info:
	  Machine ID:                 fd36efc168a545e19e5a580c2e506316
	  System UUID:                8ce237c8-20ba-4507-af0e-40571ac4a272
	  Boot ID:                    0e6f3313-afe9-4b8d-8d49-46470123e935
	  Kernel Version:             5.15.0-1041-gcp
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.1
	  Kube-Proxy Version:         v1.28.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-zvh8m                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     53s
	  kube-system                 etcd-pause-844693                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         65s
	  kube-system                 kindnet-7tct8                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      52s
	  kube-system                 kube-apiserver-pause-844693             250m (3%)     0 (0%)      0 (0%)           0 (0%)         65s
	  kube-system                 kube-controller-manager-pause-844693    200m (2%)     0 (0%)      0 (0%)           0 (0%)         65s
	  kube-system                 kube-proxy-gfzb6                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	  kube-system                 kube-scheduler-pause-844693             100m (1%)     0 (0%)      0 (0%)           0 (0%)         65s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 51s                kube-proxy       
	  Normal  Starting                 21s                kube-proxy       
	  Normal  Starting                 72s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  71s (x8 over 72s)  kubelet          Node pause-844693 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    71s (x8 over 72s)  kubelet          Node pause-844693 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     71s (x8 over 72s)  kubelet          Node pause-844693 status is now: NodeHasSufficientPID
	  Normal  Starting                 65s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  65s                kubelet          Node pause-844693 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    65s                kubelet          Node pause-844693 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     65s                kubelet          Node pause-844693 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           53s                node-controller  Node pause-844693 event: Registered Node pause-844693 in Controller
	  Normal  NodeReady                51s                kubelet          Node pause-844693 status is now: NodeReady
	  Normal  RegisteredNode           10s                node-controller  Node pause-844693 event: Registered Node pause-844693 in Controller
	
	* 
	* ==> dmesg <==
	* [  +4.255658] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-40f62e59100c
	[  +0.000005] ll header: 00000000: 02 42 ee 21 f8 bd 02 42 c0 a8 3a 02 08 00
	[  +8.191293] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-40f62e59100c
	[  +0.000005] ll header: 00000000: 02 42 ee 21 f8 bd 02 42 c0 a8 3a 02 08 00
	[Sep11 11:32] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-40f62e59100c
	[  +0.000008] ll header: 00000000: 02 42 ee 21 f8 bd 02 42 c0 a8 3a 02 08 00
	[  +1.001369] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-40f62e59100c
	[  +0.000006] ll header: 00000000: 02 42 ee 21 f8 bd 02 42 c0 a8 3a 02 08 00
	[  +2.015800] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-40f62e59100c
	[  +0.000021] ll header: 00000000: 02 42 ee 21 f8 bd 02 42 c0 a8 3a 02 08 00
	[Sep11 11:33] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-40f62e59100c
	[  +0.000025] ll header: 00000000: 02 42 ee 21 f8 bd 02 42 c0 a8 3a 02 08 00
	[  +8.195301] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-40f62e59100c
	[  +0.000005] ll header: 00000000: 02 42 ee 21 f8 bd 02 42 c0 a8 3a 02 08 00
	[Sep11 11:36] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-2368f9e917b6
	[  +0.000009] ll header: 00000000: 02 42 26 1c bc 41 02 42 c0 a8 43 02 08 00
	[  +1.011311] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-2368f9e917b6
	[  +0.000025] ll header: 00000000: 02 42 26 1c bc 41 02 42 c0 a8 43 02 08 00
	[  +2.019772] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-2368f9e917b6
	[  +0.000006] ll header: 00000000: 02 42 26 1c bc 41 02 42 c0 a8 43 02 08 00
	[  +4.187654] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-2368f9e917b6
	[  +0.000006] ll header: 00000000: 02 42 26 1c bc 41 02 42 c0 a8 43 02 08 00
	[  +8.191342] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-2368f9e917b6
	[  +0.000006] ll header: 00000000: 02 42 26 1c bc 41 02 42 c0 a8 43 02 08 00
	[Sep11 11:40] process 'docker/tmp/qemu-check437207382/check' started with executable stack
	
	* 
	* ==> etcd [cdf9aa78109f17bfdb382122a5728c8159ea39b39801dbd64eb80d2483cc2cab] <==
	* {"level":"info","ts":"2023-09-11T11:43:13.890145Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2023-09-11T11:43:13.890195Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2023-09-11T11:43:13.890231Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2023-09-11T11:43:13.890249Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2023-09-11T11:43:13.890257Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2023-09-11T11:43:13.890269Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2023-09-11T11:43:13.890291Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2023-09-11T11:43:13.891743Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-11T11:43:13.891733Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:pause-844693 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2023-09-11T11:43:13.891754Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-11T11:43:13.893183Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2023-09-11T11:43:13.892574Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-09-11T11:43:13.893276Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-09-11T11:43:13.893764Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-09-11T11:43:14.319019Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2023-09-11T11:43:14.319178Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"pause-844693","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	{"level":"warn","ts":"2023-09-11T11:43:14.319277Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-09-11T11:43:14.319354Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-09-11T11:43:14.319504Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"127.0.0.1:38658","server-name":"","error":"read tcp 127.0.0.1:2379->127.0.0.1:38658: use of closed network connection"}
	{"level":"warn","ts":"2023-09-11T11:43:14.367237Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-09-11T11:43:14.3673Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"info","ts":"2023-09-11T11:43:14.367352Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ea7e25599daad906","current-leader-member-id":"ea7e25599daad906"}
	{"level":"info","ts":"2023-09-11T11:43:14.370155Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2023-09-11T11:43:14.370261Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2023-09-11T11:43:14.370299Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"pause-844693","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	
	* 
	* ==> etcd [d7048e0b4e8348142a7d3d7b1571b7df79b4b35a53c9f8793e6235036b8c14e7] <==
	* {"level":"info","ts":"2023-09-11T11:43:24.88992Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-09-11T11:43:24.889958Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-09-11T11:43:24.890014Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2023-09-11T11:43:24.89021Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2023-09-11T11:43:24.890378Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-11T11:43:24.890424Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-11T11:43:24.89326Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-09-11T11:43:24.893344Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2023-09-11T11:43:24.893362Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2023-09-11T11:43:24.893636Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-09-11T11:43:24.893728Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-09-11T11:43:26.768191Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 3"}
	{"level":"info","ts":"2023-09-11T11:43:26.768265Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 3"}
	{"level":"info","ts":"2023-09-11T11:43:26.768281Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2023-09-11T11:43:26.768293Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 4"}
	{"level":"info","ts":"2023-09-11T11:43:26.768298Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 4"}
	{"level":"info","ts":"2023-09-11T11:43:26.768306Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 4"}
	{"level":"info","ts":"2023-09-11T11:43:26.768313Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 4"}
	{"level":"info","ts":"2023-09-11T11:43:26.769756Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-11T11:43:26.769759Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:pause-844693 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2023-09-11T11:43:26.769783Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-11T11:43:26.77006Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-09-11T11:43:26.770083Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-09-11T11:43:26.771024Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2023-09-11T11:43:26.771133Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	* 
	* ==> kernel <==
	*  11:43:50 up  1:26,  0 users,  load average: 6.58, 4.16, 2.49
	Linux pause-844693 5.15.0-1041-gcp #49~20.04.1-Ubuntu SMP Tue Aug 29 06:49:34 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [43b750852cf7cf1ba60fa8e429fff93606a5b2db68b62a2e96080df44d120808] <==
	* I0911 11:43:11.667937       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0911 11:43:11.668130       1 main.go:107] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I0911 11:43:11.668347       1 main.go:116] setting mtu 1500 for CNI 
	I0911 11:43:11.668393       1 main.go:146] kindnetd IP family: "ipv4"
	I0911 11:43:11.668435       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0911 11:43:12.059009       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	I0911 11:43:12.059259       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	
	* 
	* ==> kindnet [5068566eb8b8e9b7882e28cde4266b0c0493bf561be465a46bd9e8934d040a26] <==
	* I0911 11:43:25.063184       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0911 11:43:25.063247       1 main.go:107] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I0911 11:43:25.063436       1 main.go:116] setting mtu 1500 for CNI 
	I0911 11:43:25.063455       1 main.go:146] kindnetd IP family: "ipv4"
	I0911 11:43:25.063483       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0911 11:43:28.165637       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0911 11:43:28.167422       1 main.go:227] handling current node
	I0911 11:43:38.181733       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0911 11:43:38.181758       1 main.go:227] handling current node
	I0911 11:43:48.194334       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0911 11:43:48.194367       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [ac4f8827ccd7654f7332de2fa03fe664c40df7b333f8d5c3f10073848d4af152] <==
	* I0911 11:43:27.935381       1 controller.go:85] Starting OpenAPI V3 controller
	I0911 11:43:27.936156       1 apf_controller.go:372] Starting API Priority and Fairness config controller
	I0911 11:43:27.937115       1 aggregator.go:164] waiting for initial CRD sync...
	I0911 11:43:27.937129       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0911 11:43:27.937134       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I0911 11:43:27.937179       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0911 11:43:27.937273       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0911 11:43:28.062521       1 shared_informer.go:318] Caches are synced for configmaps
	I0911 11:43:28.063426       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0911 11:43:28.068586       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0911 11:43:28.072433       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0911 11:43:28.074333       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0911 11:43:28.074416       1 aggregator.go:166] initial CRD sync complete...
	I0911 11:43:28.074450       1 autoregister_controller.go:141] Starting autoregister controller
	I0911 11:43:28.074499       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0911 11:43:28.074531       1 cache.go:39] Caches are synced for autoregister controller
	I0911 11:43:28.158715       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0911 11:43:28.158726       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0911 11:43:28.161370       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0911 11:43:28.158827       1 shared_informer.go:318] Caches are synced for node_authorizer
	E0911 11:43:28.164632       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0911 11:43:28.940065       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0911 11:43:40.896398       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0911 11:43:40.951410       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0911 11:43:40.995339       1 controller.go:624] quota admission added evaluator for: endpoints
	
	* 
	* ==> kube-apiserver [fdb91a124a6a570b2436748b4ba6a86b898e9d6a13a3930db525639b7ccf74fd] <==
	*   "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0911 11:43:14.325104       1 logging.go:59] [core] [Channel #58 SubChannel #59] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0911 11:43:14.325345       1 logging.go:59] [core] [Channel #64 SubChannel #65] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: authentication handshake failed: EOF"
	W0911 11:43:14.358394       1 reflector.go:535] storage/cacher.go:/serviceaccounts: failed to list *core.ServiceAccount: rpc error: code = Internal desc = server closed the stream without sending trailers
	E0911 11:43:14.358564       1 cacher.go:470] cacher (serviceaccounts): unexpected ListAndWatch error: failed to list *core.ServiceAccount: rpc error: code = Internal desc = server closed the stream without sending trailers; reinitializing...
	
	* 
	* ==> kube-controller-manager [835bc7b9b230ee62639349a0da59602136db9aee6c3f4f8b1dd733343e69f213] <==
	* I0911 11:43:40.719741       1 shared_informer.go:318] Caches are synced for endpoint
	I0911 11:43:40.731009       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I0911 11:43:40.801732       1 shared_informer.go:318] Caches are synced for resource quota
	I0911 11:43:40.807924       1 shared_informer.go:318] Caches are synced for taint
	I0911 11:43:40.808033       1 node_lifecycle_controller.go:1225] "Initializing eviction metric for zone" zone=""
	I0911 11:43:40.808082       1 taint_manager.go:206] "Starting NoExecuteTaintManager"
	I0911 11:43:40.808143       1 taint_manager.go:211] "Sending events to api server"
	I0911 11:43:40.808159       1 event.go:307] "Event occurred" object="pause-844693" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-844693 event: Registered Node pause-844693 in Controller"
	I0911 11:43:40.808220       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="pause-844693"
	I0911 11:43:40.808313       1 node_lifecycle_controller.go:1071] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I0911 11:43:40.896655       1 shared_informer.go:318] Caches are synced for resource quota
	I0911 11:43:40.955177       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I0911 11:43:40.960348       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-2gn29"
	I0911 11:43:40.967566       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="12.489019ms"
	I0911 11:43:40.978264       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="10.628899ms"
	I0911 11:43:40.978415       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="97.374µs"
	I0911 11:43:41.210486       1 shared_informer.go:318] Caches are synced for garbage collector
	I0911 11:43:41.243686       1 shared_informer.go:318] Caches are synced for garbage collector
	I0911 11:43:41.243723       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0911 11:43:41.625604       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="119.974µs"
	I0911 11:43:41.642943       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="7.978649ms"
	I0911 11:43:41.643055       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="70.689µs"
	I0911 11:43:44.636878       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="80.944µs"
	I0911 11:43:44.647024       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="79.659µs"
	I0911 11:43:49.820604       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="80.981µs"
	
	* 
	* ==> kube-controller-manager [aa9227286c98956417f65ee195d8cc9c096f779ac33dd93e51ec1f63e9c64727] <==
	* I0911 11:43:13.305810       1 serving.go:348] Generated self-signed cert in-memory
	I0911 11:43:14.111231       1 controllermanager.go:189] "Starting" version="v1.28.1"
	I0911 11:43:14.111265       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0911 11:43:14.112500       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0911 11:43:14.112628       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0911 11:43:14.113329       1 secure_serving.go:210] Serving securely on 127.0.0.1:10257
	I0911 11:43:14.113373       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	
	* 
	* ==> kube-proxy [0885e2fcf44f13ce18fb0b2e5369f657935199c74ef3bb6c3f7d944dd92c903f] <==
	* I0911 11:43:11.893874       1 server_others.go:69] "Using iptables proxy"
	E0911 11:43:11.896303       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-844693": dial tcp 192.168.76.2:8443: connect: connection refused
	
	* 
	* ==> kube-proxy [bb8630f2c0c739856ec1d9f5ae6e2cb86e6529c519f3a9f7a41a0c884b6df3f7] <==
	* I0911 11:43:24.786572       1 server_others.go:69] "Using iptables proxy"
	E0911 11:43:24.791732       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-844693": dial tcp 192.168.76.2:8443: connect: connection refused
	I0911 11:43:28.166246       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I0911 11:43:28.269163       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0911 11:43:28.271458       1 server_others.go:152] "Using iptables Proxier"
	I0911 11:43:28.271498       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0911 11:43:28.271507       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0911 11:43:28.271548       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0911 11:43:28.271848       1 server.go:846] "Version info" version="v1.28.1"
	I0911 11:43:28.271909       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0911 11:43:28.272601       1 config.go:97] "Starting endpoint slice config controller"
	I0911 11:43:28.272664       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0911 11:43:28.272623       1 config.go:188] "Starting service config controller"
	I0911 11:43:28.273260       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0911 11:43:28.272644       1 config.go:315] "Starting node config controller"
	I0911 11:43:28.273286       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0911 11:43:28.373065       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0911 11:43:28.373447       1 shared_informer.go:318] Caches are synced for node config
	I0911 11:43:28.373468       1 shared_informer.go:318] Caches are synced for service config
	
	* 
	* ==> kube-scheduler [76d35a166fd5d8b00d62567d0e510be9f811d2a2733ee48dbe533273800db765] <==
	* I0911 11:43:13.179735       1 serving.go:348] Generated self-signed cert in-memory
	
	* 
	* ==> kube-scheduler [8a7deea25aedf792cb3feb59e6880809860c455ca2386b933bc5322f4e9d34b6] <==
	* I0911 11:43:25.709816       1 serving.go:348] Generated self-signed cert in-memory
	W0911 11:43:28.061004       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0911 11:43:28.061049       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system": RBAC: [role.rbac.authorization.k8s.io "extension-apiserver-authentication-reader" not found, role.rbac.authorization.k8s.io "system::leader-locking-kube-scheduler" not found]
	W0911 11:43:28.061063       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0911 11:43:28.061072       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0911 11:43:28.161097       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.1"
	I0911 11:43:28.161136       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0911 11:43:28.164377       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0911 11:43:28.164489       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0911 11:43:28.165375       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0911 11:43:28.165463       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0911 11:43:28.265973       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* Sep 11 11:43:45 pause-844693 kubelet[1591]: E0911 11:43:45.388701    1591 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/0abd77e79d5280922a8508cf5962ff9743b3a4068d16612655fce0ce37af6732/diff" to get inode usage: stat /var/lib/containers/storage/overlay/0abd77e79d5280922a8508cf5962ff9743b3a4068d16612655fce0ce37af6732/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/kube-system_kube-controller-manager-pause-844693_ef0c57dfba35c15e5cae89b29f3aaa26/kube-controller-manager/0.log" to get inode usage: stat /var/log/pods/kube-system_kube-controller-manager-pause-844693_ef0c57dfba35c15e5cae89b29f3aaa26/kube-controller-manager/0.log: no such file or directory
	Sep 11 11:43:45 pause-844693 kubelet[1591]: E0911 11:43:45.389788    1591 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/a4fa0d889fc399250d288af7158a4353953d563b9a76a1d3b83cc61bb34c3bb6/diff" to get inode usage: stat /var/lib/containers/storage/overlay/a4fa0d889fc399250d288af7158a4353953d563b9a76a1d3b83cc61bb34c3bb6/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/kube-system_kube-apiserver-pause-844693_dd016f978e4d2527ba2db43aba9496e8/kube-apiserver/0.log" to get inode usage: stat /var/log/pods/kube-system_kube-apiserver-pause-844693_dd016f978e4d2527ba2db43aba9496e8/kube-apiserver/0.log: no such file or directory
	Sep 11 11:43:45 pause-844693 kubelet[1591]: E0911 11:43:45.399452    1591 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/3dec642af2ced99c04002c8a24376b298332cf4be21fe3915b59aae464d8d7bc/diff" to get inode usage: stat /var/lib/containers/storage/overlay/3dec642af2ced99c04002c8a24376b298332cf4be21fe3915b59aae464d8d7bc/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/kube-system_kube-scheduler-pause-844693_8cc32ea88c75cf7fa9232edbcac5cac2/kube-scheduler/0.log" to get inode usage: stat /var/log/pods/kube-system_kube-scheduler-pause-844693_8cc32ea88c75cf7fa9232edbcac5cac2/kube-scheduler/0.log: no such file or directory
	Sep 11 11:43:49 pause-844693 kubelet[1591]: E0911 11:43:49.658240    1591 manager.go:1106] Failed to create existing container: /crio-f1bbe20f37ff2bb977c6512344f792aa53f8cc5cb222f22515286e8e2bbdd5ed: Error finding container f1bbe20f37ff2bb977c6512344f792aa53f8cc5cb222f22515286e8e2bbdd5ed: Status 404 returned error can't find the container with id f1bbe20f37ff2bb977c6512344f792aa53f8cc5cb222f22515286e8e2bbdd5ed
	Sep 11 11:43:49 pause-844693 kubelet[1591]: E0911 11:43:49.664453    1591 manager.go:1106] Failed to create existing container: /docker/19301acdf740a765cf8a948df53f89f0f5412ec12611a08738ed7ddc321e519f/crio-570ed816a3ca60dab3c956bd61a340fd015bc09e44a20cea1ed4ecc4de668ba8: Error finding container 570ed816a3ca60dab3c956bd61a340fd015bc09e44a20cea1ed4ecc4de668ba8: Status 404 returned error can't find the container with id 570ed816a3ca60dab3c956bd61a340fd015bc09e44a20cea1ed4ecc4de668ba8
	Sep 11 11:43:49 pause-844693 kubelet[1591]: E0911 11:43:49.666212    1591 manager.go:1106] Failed to create existing container: /crio-4214cedbf1d5386bb69170ad0300483685b6c1e65d3f50a1bb85b4548b42003b: Error finding container 4214cedbf1d5386bb69170ad0300483685b6c1e65d3f50a1bb85b4548b42003b: Status 404 returned error can't find the container with id 4214cedbf1d5386bb69170ad0300483685b6c1e65d3f50a1bb85b4548b42003b
	Sep 11 11:43:49 pause-844693 kubelet[1591]: E0911 11:43:49.673527    1591 manager.go:1106] Failed to create existing container: /docker/19301acdf740a765cf8a948df53f89f0f5412ec12611a08738ed7ddc321e519f/crio-f1bbe20f37ff2bb977c6512344f792aa53f8cc5cb222f22515286e8e2bbdd5ed: Error finding container f1bbe20f37ff2bb977c6512344f792aa53f8cc5cb222f22515286e8e2bbdd5ed: Status 404 returned error can't find the container with id f1bbe20f37ff2bb977c6512344f792aa53f8cc5cb222f22515286e8e2bbdd5ed
	Sep 11 11:43:49 pause-844693 kubelet[1591]: E0911 11:43:49.685124    1591 manager.go:1106] Failed to create existing container: /docker/19301acdf740a765cf8a948df53f89f0f5412ec12611a08738ed7ddc321e519f/crio-4214cedbf1d5386bb69170ad0300483685b6c1e65d3f50a1bb85b4548b42003b: Error finding container 4214cedbf1d5386bb69170ad0300483685b6c1e65d3f50a1bb85b4548b42003b: Status 404 returned error can't find the container with id 4214cedbf1d5386bb69170ad0300483685b6c1e65d3f50a1bb85b4548b42003b
	Sep 11 11:43:49 pause-844693 kubelet[1591]: E0911 11:43:49.685372    1591 manager.go:1106] Failed to create existing container: /crio-570ed816a3ca60dab3c956bd61a340fd015bc09e44a20cea1ed4ecc4de668ba8: Error finding container 570ed816a3ca60dab3c956bd61a340fd015bc09e44a20cea1ed4ecc4de668ba8: Status 404 returned error can't find the container with id 570ed816a3ca60dab3c956bd61a340fd015bc09e44a20cea1ed4ecc4de668ba8
	Sep 11 11:43:49 pause-844693 kubelet[1591]: E0911 11:43:49.693708    1591 manager.go:1106] Failed to create existing container: /crio-6b8bcfa7f2e078648fdf6998d2fb2b5d5621fb86c044cfd5f5cab28d3c24988b: Error finding container 6b8bcfa7f2e078648fdf6998d2fb2b5d5621fb86c044cfd5f5cab28d3c24988b: Status 404 returned error can't find the container with id 6b8bcfa7f2e078648fdf6998d2fb2b5d5621fb86c044cfd5f5cab28d3c24988b
	Sep 11 11:43:49 pause-844693 kubelet[1591]: E0911 11:43:49.693936    1591 manager.go:1106] Failed to create existing container: /docker/19301acdf740a765cf8a948df53f89f0f5412ec12611a08738ed7ddc321e519f/crio-6b8bcfa7f2e078648fdf6998d2fb2b5d5621fb86c044cfd5f5cab28d3c24988b: Error finding container 6b8bcfa7f2e078648fdf6998d2fb2b5d5621fb86c044cfd5f5cab28d3c24988b: Status 404 returned error can't find the container with id 6b8bcfa7f2e078648fdf6998d2fb2b5d5621fb86c044cfd5f5cab28d3c24988b
	Sep 11 11:43:49 pause-844693 kubelet[1591]: I0911 11:43:49.896527    1591 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ade2d2da-baae-423c-8c9a-6294d0d22277-config-volume\") pod \"ade2d2da-baae-423c-8c9a-6294d0d22277\" (UID: \"ade2d2da-baae-423c-8c9a-6294d0d22277\") "
	Sep 11 11:43:49 pause-844693 kubelet[1591]: I0911 11:43:49.896574    1591 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lngdp\" (UniqueName: \"kubernetes.io/projected/ade2d2da-baae-423c-8c9a-6294d0d22277-kube-api-access-lngdp\") pod \"ade2d2da-baae-423c-8c9a-6294d0d22277\" (UID: \"ade2d2da-baae-423c-8c9a-6294d0d22277\") "
	Sep 11 11:43:49 pause-844693 kubelet[1591]: I0911 11:43:49.896933    1591 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ade2d2da-baae-423c-8c9a-6294d0d22277-config-volume" (OuterVolumeSpecName: "config-volume") pod "ade2d2da-baae-423c-8c9a-6294d0d22277" (UID: "ade2d2da-baae-423c-8c9a-6294d0d22277"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Sep 11 11:43:49 pause-844693 kubelet[1591]: I0911 11:43:49.898686    1591 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ade2d2da-baae-423c-8c9a-6294d0d22277-kube-api-access-lngdp" (OuterVolumeSpecName: "kube-api-access-lngdp") pod "ade2d2da-baae-423c-8c9a-6294d0d22277" (UID: "ade2d2da-baae-423c-8c9a-6294d0d22277"). InnerVolumeSpecName "kube-api-access-lngdp". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 11 11:43:49 pause-844693 kubelet[1591]: I0911 11:43:49.997072    1591 reconciler_common.go:300] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ade2d2da-baae-423c-8c9a-6294d0d22277-config-volume\") on node \"pause-844693\" DevicePath \"\""
	Sep 11 11:43:49 pause-844693 kubelet[1591]: I0911 11:43:49.997130    1591 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-lngdp\" (UniqueName: \"kubernetes.io/projected/ade2d2da-baae-423c-8c9a-6294d0d22277-kube-api-access-lngdp\") on node \"pause-844693\" DevicePath \"\""
	Sep 11 11:43:50 pause-844693 kubelet[1591]: I0911 11:43:50.635096    1591 scope.go:117] "RemoveContainer" containerID="504dd4136806cf8fc1073d17eaf742bf69b22f4f925910e04efdaa11460d007c"
	Sep 11 11:43:50 pause-844693 kubelet[1591]: I0911 11:43:50.652136    1591 scope.go:117] "RemoveContainer" containerID="a441792974757ee7a4534b129dde0ff64172775499e60591ed65e0082d1c2680"
	Sep 11 11:43:50 pause-844693 kubelet[1591]: I0911 11:43:50.669395    1591 scope.go:117] "RemoveContainer" containerID="504dd4136806cf8fc1073d17eaf742bf69b22f4f925910e04efdaa11460d007c"
	Sep 11 11:43:50 pause-844693 kubelet[1591]: E0911 11:43:50.669849    1591 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"504dd4136806cf8fc1073d17eaf742bf69b22f4f925910e04efdaa11460d007c\": container with ID starting with 504dd4136806cf8fc1073d17eaf742bf69b22f4f925910e04efdaa11460d007c not found: ID does not exist" containerID="504dd4136806cf8fc1073d17eaf742bf69b22f4f925910e04efdaa11460d007c"
	Sep 11 11:43:50 pause-844693 kubelet[1591]: I0911 11:43:50.669925    1591 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"504dd4136806cf8fc1073d17eaf742bf69b22f4f925910e04efdaa11460d007c"} err="failed to get container status \"504dd4136806cf8fc1073d17eaf742bf69b22f4f925910e04efdaa11460d007c\": rpc error: code = NotFound desc = could not find container \"504dd4136806cf8fc1073d17eaf742bf69b22f4f925910e04efdaa11460d007c\": container with ID starting with 504dd4136806cf8fc1073d17eaf742bf69b22f4f925910e04efdaa11460d007c not found: ID does not exist"
	Sep 11 11:43:50 pause-844693 kubelet[1591]: I0911 11:43:50.669940    1591 scope.go:117] "RemoveContainer" containerID="a441792974757ee7a4534b129dde0ff64172775499e60591ed65e0082d1c2680"
	Sep 11 11:43:50 pause-844693 kubelet[1591]: E0911 11:43:50.670929    1591 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a441792974757ee7a4534b129dde0ff64172775499e60591ed65e0082d1c2680\": container with ID starting with a441792974757ee7a4534b129dde0ff64172775499e60591ed65e0082d1c2680 not found: ID does not exist" containerID="a441792974757ee7a4534b129dde0ff64172775499e60591ed65e0082d1c2680"
	Sep 11 11:43:50 pause-844693 kubelet[1591]: I0911 11:43:50.670985    1591 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a441792974757ee7a4534b129dde0ff64172775499e60591ed65e0082d1c2680"} err="failed to get container status \"a441792974757ee7a4534b129dde0ff64172775499e60591ed65e0082d1c2680\": rpc error: code = NotFound desc = could not find container \"a441792974757ee7a4534b129dde0ff64172775499e60591ed65e0082d1c2680\": container with ID starting with a441792974757ee7a4534b129dde0ff64172775499e60591ed65e0082d1c2680 not found: ID does not exist"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0911 11:43:49.316315  343027 logs.go:266] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/17223-136166/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-844693 -n pause-844693
helpers_test.go:261: (dbg) Run:  kubectl --context pause-844693 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (47.21s)

                                                
                                    

Test pass (268/298)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 4.82
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.06
10 TestDownloadOnly/v1.28.1/json-events 4.41
11 TestDownloadOnly/v1.28.1/preload-exists 0
15 TestDownloadOnly/v1.28.1/LogsDuration 0.06
16 TestDownloadOnly/DeleteAll 0.2
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.12
18 TestDownloadOnlyKic 1.24
19 TestBinaryMirror 0.71
20 TestOffline 83.88
22 TestAddons/Setup 124.15
24 TestAddons/parallel/Registry 15.08
26 TestAddons/parallel/InspektorGadget 10.74
27 TestAddons/parallel/MetricsServer 5.66
28 TestAddons/parallel/HelmTiller 11.75
30 TestAddons/parallel/CSI 83.8
31 TestAddons/parallel/Headlamp 13.18
32 TestAddons/parallel/CloudSpanner 5.8
35 TestAddons/serial/GCPAuth/Namespaces 0.13
36 TestAddons/StoppedEnableDisable 12.12
37 TestCertOptions 28.76
38 TestCertExpiration 237.37
40 TestForceSystemdFlag 25.22
41 TestForceSystemdEnv 38.43
43 TestKVMDriverInstallOrUpdate 4.05
47 TestErrorSpam/setup 21.21
48 TestErrorSpam/start 0.59
49 TestErrorSpam/status 0.85
50 TestErrorSpam/pause 1.45
51 TestErrorSpam/unpause 1.48
52 TestErrorSpam/stop 1.35
55 TestFunctional/serial/CopySyncFile 0
56 TestFunctional/serial/StartWithProxy 36.51
57 TestFunctional/serial/AuditLog 0
58 TestFunctional/serial/SoftStart 28.15
59 TestFunctional/serial/KubeContext 0.05
60 TestFunctional/serial/KubectlGetPods 0.07
63 TestFunctional/serial/CacheCmd/cache/add_remote 2.78
64 TestFunctional/serial/CacheCmd/cache/add_local 1.69
65 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
66 TestFunctional/serial/CacheCmd/cache/list 0.04
67 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.26
68 TestFunctional/serial/CacheCmd/cache/cache_reload 1.61
69 TestFunctional/serial/CacheCmd/cache/delete 0.09
70 TestFunctional/serial/MinikubeKubectlCmd 0.1
71 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
72 TestFunctional/serial/ExtraConfig 31.94
73 TestFunctional/serial/ComponentHealth 0.06
74 TestFunctional/serial/LogsCmd 1.33
75 TestFunctional/serial/LogsFileCmd 1.34
76 TestFunctional/serial/InvalidService 3.98
78 TestFunctional/parallel/ConfigCmd 0.37
79 TestFunctional/parallel/DashboardCmd 11.11
80 TestFunctional/parallel/DryRun 0.4
81 TestFunctional/parallel/InternationalLanguage 0.17
82 TestFunctional/parallel/StatusCmd 0.91
86 TestFunctional/parallel/ServiceCmdConnect 9.69
87 TestFunctional/parallel/AddonsCmd 0.12
88 TestFunctional/parallel/PersistentVolumeClaim 34.05
90 TestFunctional/parallel/SSHCmd 0.56
91 TestFunctional/parallel/CpCmd 1.5
92 TestFunctional/parallel/MySQL 25.24
93 TestFunctional/parallel/FileSync 0.32
94 TestFunctional/parallel/CertSync 1.48
98 TestFunctional/parallel/NodeLabels 0.08
100 TestFunctional/parallel/NonActiveRuntimeDisabled 0.53
102 TestFunctional/parallel/License 0.23
103 TestFunctional/parallel/Version/short 0.06
104 TestFunctional/parallel/Version/components 0.53
105 TestFunctional/parallel/ImageCommands/ImageListShort 0.31
106 TestFunctional/parallel/ImageCommands/ImageListTable 0.21
107 TestFunctional/parallel/ImageCommands/ImageListJson 0.21
108 TestFunctional/parallel/ImageCommands/ImageListYaml 0.22
109 TestFunctional/parallel/ImageCommands/ImageBuild 2.44
110 TestFunctional/parallel/ImageCommands/Setup 1.37
111 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 3.89
112 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 9.65
114 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.61
115 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
117 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 22.38
118 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 8.1
119 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.84
120 TestFunctional/parallel/ImageCommands/ImageRemove 0.53
121 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.89
122 TestFunctional/parallel/UpdateContextCmd/no_changes 0.15
123 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.16
124 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.13
125 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.02
126 TestFunctional/parallel/ServiceCmd/DeployApp 7.27
127 TestFunctional/parallel/ProfileCmd/profile_not_create 0.34
128 TestFunctional/parallel/ProfileCmd/profile_list 0.34
129 TestFunctional/parallel/ProfileCmd/profile_json_output 0.37
130 TestFunctional/parallel/MountCmd/any-port 6.73
131 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
132 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
136 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
137 TestFunctional/parallel/ServiceCmd/List 0.94
138 TestFunctional/parallel/ServiceCmd/JSONOutput 0.92
139 TestFunctional/parallel/MountCmd/specific-port 2.04
140 TestFunctional/parallel/ServiceCmd/HTTPS 0.58
141 TestFunctional/parallel/ServiceCmd/Format 0.55
142 TestFunctional/parallel/ServiceCmd/URL 0.64
143 TestFunctional/parallel/MountCmd/VerifyCleanup 1.9
144 TestFunctional/delete_addon-resizer_images 0.08
145 TestFunctional/delete_my-image_image 0.02
146 TestFunctional/delete_minikube_cached_images 0.02
150 TestIngressAddonLegacy/StartLegacyK8sCluster 66.39
152 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 10.77
153 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.52
157 TestJSONOutput/start/Command 65.47
158 TestJSONOutput/start/Audit 0
160 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
161 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
163 TestJSONOutput/pause/Command 0.67
164 TestJSONOutput/pause/Audit 0
166 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
167 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
169 TestJSONOutput/unpause/Command 0.59
170 TestJSONOutput/unpause/Audit 0
172 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
173 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
175 TestJSONOutput/stop/Command 5.77
176 TestJSONOutput/stop/Audit 0
178 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
179 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
180 TestErrorJSONOutput 0.19
182 TestKicCustomNetwork/create_custom_network 32.1
183 TestKicCustomNetwork/use_default_bridge_network 26.51
184 TestKicExistingNetwork 27.39
185 TestKicCustomSubnet 27.47
186 TestKicStaticIP 26.95
187 TestMainNoArgs 0.04
188 TestMinikubeProfile 52.73
191 TestMountStart/serial/StartWithMountFirst 5.28
192 TestMountStart/serial/VerifyMountFirst 0.24
193 TestMountStart/serial/StartWithMountSecond 5.25
194 TestMountStart/serial/VerifyMountSecond 0.24
195 TestMountStart/serial/DeleteFirst 1.61
196 TestMountStart/serial/VerifyMountPostDelete 0.24
197 TestMountStart/serial/Stop 1.2
198 TestMountStart/serial/RestartStopped 7.06
199 TestMountStart/serial/VerifyMountPostStop 0.24
202 TestMultiNode/serial/FreshStart2Nodes 114.44
203 TestMultiNode/serial/DeployApp2Nodes 4.26
205 TestMultiNode/serial/AddNode 21.19
206 TestMultiNode/serial/ProfileList 0.26
207 TestMultiNode/serial/CopyFile 8.76
208 TestMultiNode/serial/StopNode 2.1
209 TestMultiNode/serial/StartAfterStop 10.89
210 TestMultiNode/serial/RestartKeepsNodes 111.6
211 TestMultiNode/serial/DeleteNode 4.64
212 TestMultiNode/serial/StopMultiNode 23.78
213 TestMultiNode/serial/RestartMultiNode 79.22
214 TestMultiNode/serial/ValidateNameConflict 26.99
219 TestPreload 150.17
221 TestScheduledStopUnix 96.41
224 TestInsufficientStorage 12.99
227 TestKubernetesUpgrade 357.54
228 TestMissingContainerUpgrade 144.44
231 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
234 TestNoKubernetes/serial/StartWithK8s 35.45
239 TestNetworkPlugins/group/false 8.93
243 TestNoKubernetes/serial/StartWithStopK8s 18.76
244 TestNoKubernetes/serial/Start 4.68
245 TestNoKubernetes/serial/VerifyK8sNotRunning 0.27
246 TestNoKubernetes/serial/ProfileList 1.7
247 TestNoKubernetes/serial/Stop 1.24
248 TestNoKubernetes/serial/StartNoArgs 7.96
249 TestStoppedBinaryUpgrade/Setup 0.48
251 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.26
252 TestStoppedBinaryUpgrade/MinikubeLogs 0.52
261 TestPause/serial/Start 44.08
262 TestNetworkPlugins/group/auto/Start 70.56
263 TestNetworkPlugins/group/kindnet/Start 72.48
265 TestNetworkPlugins/group/calico/Start 63.31
266 TestNetworkPlugins/group/auto/KubeletFlags 0.31
267 TestNetworkPlugins/group/auto/NetCatPod 9.35
268 TestNetworkPlugins/group/kindnet/ControllerPod 5.02
269 TestNetworkPlugins/group/kindnet/KubeletFlags 0.26
270 TestNetworkPlugins/group/kindnet/NetCatPod 9.28
271 TestNetworkPlugins/group/auto/DNS 0.19
272 TestNetworkPlugins/group/auto/Localhost 0.16
273 TestNetworkPlugins/group/auto/HairPin 0.14
274 TestNetworkPlugins/group/kindnet/DNS 0.18
275 TestNetworkPlugins/group/kindnet/Localhost 0.2
276 TestNetworkPlugins/group/kindnet/HairPin 0.21
277 TestNetworkPlugins/group/custom-flannel/Start 63.55
278 TestNetworkPlugins/group/enable-default-cni/Start 77.17
279 TestNetworkPlugins/group/calico/ControllerPod 5.03
280 TestNetworkPlugins/group/calico/KubeletFlags 0.26
281 TestNetworkPlugins/group/calico/NetCatPod 10.36
282 TestNetworkPlugins/group/calico/DNS 0.19
283 TestNetworkPlugins/group/calico/Localhost 0.18
284 TestNetworkPlugins/group/calico/HairPin 0.17
285 TestNetworkPlugins/group/flannel/Start 58.63
286 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.26
287 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.24
288 TestNetworkPlugins/group/custom-flannel/DNS 0.2
289 TestNetworkPlugins/group/custom-flannel/Localhost 0.16
290 TestNetworkPlugins/group/custom-flannel/HairPin 0.18
291 TestNetworkPlugins/group/bridge/Start 38.02
292 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.31
293 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.32
295 TestStartStop/group/old-k8s-version/serial/FirstStart 126.25
296 TestNetworkPlugins/group/enable-default-cni/DNS 0.19
297 TestNetworkPlugins/group/enable-default-cni/Localhost 0.14
298 TestNetworkPlugins/group/enable-default-cni/HairPin 0.13
299 TestNetworkPlugins/group/flannel/ControllerPod 5.03
300 TestNetworkPlugins/group/flannel/KubeletFlags 0.34
301 TestNetworkPlugins/group/flannel/NetCatPod 13
303 TestStartStop/group/no-preload/serial/FirstStart 68.28
304 TestNetworkPlugins/group/bridge/KubeletFlags 0.4
305 TestNetworkPlugins/group/bridge/NetCatPod 9.42
306 TestNetworkPlugins/group/flannel/DNS 0.2
307 TestNetworkPlugins/group/flannel/Localhost 0.15
308 TestNetworkPlugins/group/flannel/HairPin 0.16
309 TestNetworkPlugins/group/bridge/DNS 33.79
311 TestStartStop/group/embed-certs/serial/FirstStart 41.83
312 TestNetworkPlugins/group/bridge/Localhost 0.16
313 TestNetworkPlugins/group/bridge/HairPin 0.14
315 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 67.95
316 TestStartStop/group/no-preload/serial/DeployApp 10.38
317 TestStartStop/group/embed-certs/serial/DeployApp 8.32
318 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.93
319 TestStartStop/group/no-preload/serial/Stop 12.01
320 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.98
321 TestStartStop/group/embed-certs/serial/Stop 12.06
322 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.16
323 TestStartStop/group/no-preload/serial/SecondStart 335.05
324 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.17
325 TestStartStop/group/embed-certs/serial/SecondStart 338.98
326 TestStartStop/group/old-k8s-version/serial/DeployApp 8.4
327 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.83
328 TestStartStop/group/old-k8s-version/serial/Stop 12.02
329 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.16
330 TestStartStop/group/old-k8s-version/serial/SecondStart 398.61
331 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.38
332 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.9
333 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.36
334 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.17
335 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 340.03
336 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 13.02
337 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 14.07
338 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.1
339 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.31
340 TestStartStop/group/no-preload/serial/Pause 2.67
341 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.1
343 TestStartStop/group/newest-cni/serial/FirstStart 38.15
344 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.43
345 TestStartStop/group/embed-certs/serial/Pause 3.31
346 TestStartStop/group/newest-cni/serial/DeployApp 0
347 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.92
348 TestStartStop/group/newest-cni/serial/Stop 2.24
349 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
350 TestStartStop/group/newest-cni/serial/SecondStart 26.28
351 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 12.02
352 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.09
353 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.3
354 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.82
355 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
356 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
357 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.3
358 TestStartStop/group/newest-cni/serial/Pause 2.5
359 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.02
360 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.07
361 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.28
362 TestStartStop/group/old-k8s-version/serial/Pause 2.58
x
+
TestDownloadOnly/v1.16.0/json-events (4.82s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-804318 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-804318 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (4.821080654s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (4.82s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-804318
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-804318: exit status 85 (59.38038ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-804318 | jenkins | v1.31.2 | 11 Sep 23 11:09 UTC |          |
	|         | -p download-only-804318        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/11 11:09:17
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.20.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0911 11:09:17.026126  143429 out.go:296] Setting OutFile to fd 1 ...
	I0911 11:09:17.026289  143429 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 11:09:17.026298  143429 out.go:309] Setting ErrFile to fd 2...
	I0911 11:09:17.026305  143429 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 11:09:17.026523  143429 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17223-136166/.minikube/bin
	W0911 11:09:17.026671  143429 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17223-136166/.minikube/config/config.json: open /home/jenkins/minikube-integration/17223-136166/.minikube/config/config.json: no such file or directory
	I0911 11:09:17.027287  143429 out.go:303] Setting JSON to true
	I0911 11:09:17.028554  143429 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":3105,"bootTime":1694427452,"procs":695,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0911 11:09:17.028615  143429 start.go:138] virtualization: kvm guest
	I0911 11:09:17.031269  143429 out.go:97] [download-only-804318] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0911 11:09:17.033231  143429 out.go:169] MINIKUBE_LOCATION=17223
	W0911 11:09:17.031384  143429 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/17223-136166/.minikube/cache/preloaded-tarball: no such file or directory
	I0911 11:09:17.031433  143429 notify.go:220] Checking for updates...
	I0911 11:09:17.036794  143429 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0911 11:09:17.038731  143429 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17223-136166/kubeconfig
	I0911 11:09:17.040514  143429 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17223-136166/.minikube
	I0911 11:09:17.042145  143429 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0911 11:09:17.045195  143429 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0911 11:09:17.045392  143429 driver.go:373] Setting default libvirt URI to qemu:///system
	I0911 11:09:17.068103  143429 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I0911 11:09:17.068222  143429 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0911 11:09:17.125333  143429 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:41 SystemTime:2023-09-11 11:09:17.116874199 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1041-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0911 11:09:17.125428  143429 docker.go:294] overlay module found
	I0911 11:09:17.127475  143429 out.go:97] Using the docker driver based on user configuration
	I0911 11:09:17.127502  143429 start.go:298] selected driver: docker
	I0911 11:09:17.127509  143429 start.go:902] validating driver "docker" against <nil>
	I0911 11:09:17.127594  143429 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0911 11:09:17.180539  143429 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:41 SystemTime:2023-09-11 11:09:17.171845821 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1041-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0911 11:09:17.180719  143429 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0911 11:09:17.181146  143429 start_flags.go:384] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0911 11:09:17.181271  143429 start_flags.go:904] Wait components to verify : map[apiserver:true system_pods:true]
	I0911 11:09:17.183385  143429 out.go:169] Using Docker driver with root privileges
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-804318"

-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.06s)

TestDownloadOnly/v1.28.1/json-events (4.41s)

=== RUN   TestDownloadOnly/v1.28.1/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-804318 --force --alsologtostderr --kubernetes-version=v1.28.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-804318 --force --alsologtostderr --kubernetes-version=v1.28.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (4.405945765s)
--- PASS: TestDownloadOnly/v1.28.1/json-events (4.41s)

TestDownloadOnly/v1.28.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.1/preload-exists
--- PASS: TestDownloadOnly/v1.28.1/preload-exists (0.00s)
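The preload-exists check only verifies that the tarball fetched by the json-events step is already on disk, which is why it takes 0.00s. A rough manual equivalent, assuming minikube's usual cache layout under this run's MINIKUBE_HOME (the preloaded-tarball subdirectory is an assumption about the layout, not taken from this log), would be:

	ls /home/jenkins/minikube-integration/17223-136166/.minikube/cache/preloaded-tarball/ | grep v1.28.1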

TestDownloadOnly/v1.28.1/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.28.1/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-804318
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-804318: exit status 85 (59.504132ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-804318 | jenkins | v1.31.2 | 11 Sep 23 11:09 UTC |          |
	|         | -p download-only-804318        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-804318 | jenkins | v1.31.2 | 11 Sep 23 11:09 UTC |          |
	|         | -p download-only-804318        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.1   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/11 11:09:21
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.20.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0911 11:09:21.910118  143571 out.go:296] Setting OutFile to fd 1 ...
	I0911 11:09:21.910273  143571 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 11:09:21.910286  143571 out.go:309] Setting ErrFile to fd 2...
	I0911 11:09:21.910294  143571 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 11:09:21.910488  143571 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17223-136166/.minikube/bin
	W0911 11:09:21.910612  143571 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17223-136166/.minikube/config/config.json: open /home/jenkins/minikube-integration/17223-136166/.minikube/config/config.json: no such file or directory
	I0911 11:09:21.911064  143571 out.go:303] Setting JSON to true
	I0911 11:09:21.912357  143571 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":3110,"bootTime":1694427452,"procs":695,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0911 11:09:21.912421  143571 start.go:138] virtualization: kvm guest
	I0911 11:09:21.914869  143571 out.go:97] [download-only-804318] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0911 11:09:21.916641  143571 out.go:169] MINIKUBE_LOCATION=17223
	I0911 11:09:21.915089  143571 notify.go:220] Checking for updates...
	I0911 11:09:21.919870  143571 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0911 11:09:21.921395  143571 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17223-136166/kubeconfig
	I0911 11:09:21.923236  143571 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17223-136166/.minikube
	I0911 11:09:21.924816  143571 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-804318"

-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.1/LogsDuration (0.06s)

TestDownloadOnly/DeleteAll (0.2s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:187: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.20s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.12s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:199: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-804318
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.12s)

TestDownloadOnlyKic (1.24s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-028771 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "download-docker-028771" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-028771
--- PASS: TestDownloadOnlyKic (1.24s)

TestBinaryMirror (0.71s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:304: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-929527 --alsologtostderr --binary-mirror http://127.0.0.1:37375 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-929527" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-929527
--- PASS: TestBinaryMirror (0.71s)

TestOffline (83.88s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-341798 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-341798 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=crio: (1m19.956471222s)
helpers_test.go:175: Cleaning up "offline-crio-341798" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-341798
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-341798: (3.92076342s)
--- PASS: TestOffline (83.88s)

TestAddons/Setup (124.15s)

=== RUN   TestAddons/Setup
addons_test.go:88: (dbg) Run:  out/minikube-linux-amd64 start -p addons-387581 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:88: (dbg) Done: out/minikube-linux-amd64 start -p addons-387581 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m4.147123109s)
--- PASS: TestAddons/Setup (124.15s)

TestAddons/parallel/Registry (15.08s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:306: registry stabilized in 15.989589ms
addons_test.go:308: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-lsf9c" [229a155a-01b6-4c49-9097-38bc0f421cc7] Running
addons_test.go:308: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.054759105s
addons_test.go:311: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-n62vc" [81a8d184-03d9-4971-b616-c8e87daf001f] Running
addons_test.go:311: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.012717113s
addons_test.go:316: (dbg) Run:  kubectl --context addons-387581 delete po -l run=registry-test --now
addons_test.go:321: (dbg) Run:  kubectl --context addons-387581 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:321: (dbg) Done: kubectl --context addons-387581 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.82856882s)
addons_test.go:335: (dbg) Run:  out/minikube-linux-amd64 -p addons-387581 ip
2023/09/11 11:11:47 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p addons-387581 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.08s)
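Besides the in-cluster wget probe above, the registry addon is reachable from the host via the node IP on port 5000 (the GET http://192.168.49.2:5000 line). A manual spot-check could hit the standard Docker registry catalog endpoint; a sketch, not part of the harness:

	curl -s http://$(out/minikube-linux-amd64 -p addons-387581 ip):5000/v2/_catalog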

TestAddons/parallel/InspektorGadget (10.74s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:814: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-45rqh" [e2e0ffb6-35cf-4906-ba27-7f6f2d580af0] Running
addons_test.go:814: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.01184021s
addons_test.go:817: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-387581
addons_test.go:817: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-387581: (5.724928749s)
--- PASS: TestAddons/parallel/InspektorGadget (10.74s)

TestAddons/parallel/MetricsServer (5.66s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:383: metrics-server stabilized in 2.879575ms
addons_test.go:385: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-hh99k" [438d5c8d-3fb1-4282-aea2-d898eb14cde8] Running
addons_test.go:385: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.011198924s
addons_test.go:391: (dbg) Run:  kubectl --context addons-387581 top pods -n kube-system
addons_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p addons-387581 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.66s)

TestAddons/parallel/HelmTiller (11.75s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:432: tiller-deploy stabilized in 3.60486ms
addons_test.go:434: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-sh92l" [4c9748e0-81a5-477c-a57a-a5e7eb91d2f5] Running
addons_test.go:434: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.013589217s
addons_test.go:449: (dbg) Run:  kubectl --context addons-387581 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:449: (dbg) Done: kubectl --context addons-387581 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (6.245306686s)
addons_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p addons-387581 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (11.75s)

TestAddons/parallel/CSI (83.8s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:537: csi-hostpath-driver pods stabilized in 16.01075ms
addons_test.go:540: (dbg) Run:  kubectl --context addons-387581 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:545: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-387581 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-387581 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-387581 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-387581 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-387581 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-387581 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-387581 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-387581 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-387581 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-387581 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-387581 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-387581 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-387581 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-387581 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-387581 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-387581 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-387581 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-387581 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-387581 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-387581 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-387581 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-387581 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-387581 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-387581 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-387581 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-387581 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-387581 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-387581 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-387581 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-387581 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-387581 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-387581 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-387581 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:550: (dbg) Run:  kubectl --context addons-387581 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:555: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [be289e0e-9193-4dff-a0e3-f5a6f4b4677b] Pending
helpers_test.go:344: "task-pv-pod" [be289e0e-9193-4dff-a0e3-f5a6f4b4677b] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [be289e0e-9193-4dff-a0e3-f5a6f4b4677b] Running
addons_test.go:555: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 13.009676856s
addons_test.go:560: (dbg) Run:  kubectl --context addons-387581 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:565: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-387581 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-387581 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-387581 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:570: (dbg) Run:  kubectl --context addons-387581 delete pod task-pv-pod
addons_test.go:576: (dbg) Run:  kubectl --context addons-387581 delete pvc hpvc
addons_test.go:582: (dbg) Run:  kubectl --context addons-387581 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:587: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-387581 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-387581 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-387581 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-387581 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-387581 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-387581 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-387581 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-387581 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-387581 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-387581 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-387581 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-387581 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-387581 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-387581 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-387581 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-387581 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-387581 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-387581 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-387581 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-387581 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-387581 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-387581 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:592: (dbg) Run:  kubectl --context addons-387581 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:597: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [b80aa829-e28e-4444-ba0e-da5729b383ea] Pending
helpers_test.go:344: "task-pv-pod-restore" [b80aa829-e28e-4444-ba0e-da5729b383ea] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [b80aa829-e28e-4444-ba0e-da5729b383ea] Running
addons_test.go:597: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.008957033s
addons_test.go:602: (dbg) Run:  kubectl --context addons-387581 delete pod task-pv-pod-restore
addons_test.go:606: (dbg) Run:  kubectl --context addons-387581 delete pvc hpvc-restore
addons_test.go:610: (dbg) Run:  kubectl --context addons-387581 delete volumesnapshot new-snapshot-demo
addons_test.go:614: (dbg) Run:  out/minikube-linux-amd64 -p addons-387581 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:614: (dbg) Done: out/minikube-linux-amd64 -p addons-387581 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.578706013s)
addons_test.go:618: (dbg) Run:  out/minikube-linux-amd64 -p addons-387581 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (83.80s)
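The long runs of "get pvc ... -o jsonpath={.status.phase}" above are the helper polling until each claim leaves Pending and binds. On kubectl 1.23 or newer the same wait can be expressed in a single command; a sketch of the equivalent, not what the harness runs:

	kubectl --context addons-387581 wait --for=jsonpath='{.status.phase}'=Bound pvc/hpvc --timeout=6m0s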

TestAddons/parallel/Headlamp (13.18s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:800: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-387581 --alsologtostderr -v=1
addons_test.go:800: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-387581 --alsologtostderr -v=1: (1.103583302s)
addons_test.go:805: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-699c48fb74-2qvrm" [47de2002-4625-4ab9-8436-ba40e7b31472] Pending
helpers_test.go:344: "headlamp-699c48fb74-2qvrm" [47de2002-4625-4ab9-8436-ba40e7b31472] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-699c48fb74-2qvrm" [47de2002-4625-4ab9-8436-ba40e7b31472] Running
addons_test.go:805: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.077241235s
--- PASS: TestAddons/parallel/Headlamp (13.18s)

TestAddons/parallel/CloudSpanner (5.8s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:833: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6dcc56475c-7ssfm" [d5f8515e-b1e0-4c30-8eb8-133364f59587] Running
addons_test.go:833: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.052482744s
addons_test.go:836: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-387581
--- PASS: TestAddons/parallel/CloudSpanner (5.80s)

TestAddons/serial/GCPAuth/Namespaces (0.13s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:626: (dbg) Run:  kubectl --context addons-387581 create ns new-namespace
addons_test.go:640: (dbg) Run:  kubectl --context addons-387581 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.13s)

TestAddons/StoppedEnableDisable (12.12s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:148: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-387581
addons_test.go:148: (dbg) Done: out/minikube-linux-amd64 stop -p addons-387581: (11.884943151s)
addons_test.go:152: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-387581
addons_test.go:156: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-387581
addons_test.go:161: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-387581
--- PASS: TestAddons/StoppedEnableDisable (12.12s)

TestCertOptions (28.76s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-645915 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
E0911 11:41:32.891408  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/addons-387581/client.crt: no such file or directory
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-645915 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (25.496699301s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-645915 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-645915 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-645915 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-645915" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-645915
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-645915: (2.643201831s)
--- PASS: TestCertOptions (28.76s)
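The openssl step above dumps the entire API server certificate; what the test actually cares about is whether the extra --apiserver-ips and --apiserver-names values landed in the SAN list. A narrower probe would be (a sketch, assuming OpenSSL 1.1.1+ inside the node and a still-running profile):

	out/minikube-linux-amd64 -p cert-options-645915 ssh "openssl x509 -noout -ext subjectAltName -in /var/lib/minikube/certs/apiserver.crt"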

TestCertExpiration (237.37s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-352590 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-352590 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (28.867426628s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-352590 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-352590 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (26.577331958s)
helpers_test.go:175: Cleaning up "cert-expiration-352590" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-352590
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-352590: (1.922556336s)
--- PASS: TestCertExpiration (237.37s)
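The two start commands account for under a minute of the 237s; the gap between them is the test waiting out the 3-minute certificate lifetime before restarting with --cert-expiration=8760h (one year) so minikube regenerates the certs. The new expiry could be read back with (a sketch, assuming the profile were still running):

	out/minikube-linux-amd64 -p cert-expiration-352590 ssh "openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver.crt"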

TestForceSystemdFlag (25.22s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-682524 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E0911 11:39:30.135795  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/ingress-addon-legacy-452365/client.crt: no such file or directory
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-682524 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (22.483946918s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-682524 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-682524" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-682524
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-682524: (2.428595119s)
--- PASS: TestForceSystemdFlag (25.22s)
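The "cat /etc/crio/crio.conf.d/02-crio.conf" step is how the test confirms that --force-systemd switched CRI-O's cgroup manager. A narrower probe against the same drop-in file (a sketch; on success it should show cgroup_manager = "systemd"):

	out/minikube-linux-amd64 -p force-systemd-flag-682524 ssh "grep cgroup_manager /etc/crio/crio.conf.d/02-crio.conf"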

TestForceSystemdEnv (38.43s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-345094 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-345094 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (36.025412922s)
helpers_test.go:175: Cleaning up "force-systemd-env-345094" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-345094
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-345094: (2.405537238s)
--- PASS: TestForceSystemdEnv (38.43s)

TestKVMDriverInstallOrUpdate (4.05s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (4.05s)

TestErrorSpam/setup (21.21s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-865628 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-865628 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-865628 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-865628 --driver=docker  --container-runtime=crio: (21.206460587s)
--- PASS: TestErrorSpam/setup (21.21s)

TestErrorSpam/start (0.59s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-865628 --log_dir /tmp/nospam-865628 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-865628 --log_dir /tmp/nospam-865628 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-865628 --log_dir /tmp/nospam-865628 start --dry-run
--- PASS: TestErrorSpam/start (0.59s)

TestErrorSpam/status (0.85s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-865628 --log_dir /tmp/nospam-865628 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-865628 --log_dir /tmp/nospam-865628 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-865628 --log_dir /tmp/nospam-865628 status
--- PASS: TestErrorSpam/status (0.85s)

TestErrorSpam/pause (1.45s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-865628 --log_dir /tmp/nospam-865628 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-865628 --log_dir /tmp/nospam-865628 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-865628 --log_dir /tmp/nospam-865628 pause
--- PASS: TestErrorSpam/pause (1.45s)

TestErrorSpam/unpause (1.48s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-865628 --log_dir /tmp/nospam-865628 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-865628 --log_dir /tmp/nospam-865628 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-865628 --log_dir /tmp/nospam-865628 unpause
--- PASS: TestErrorSpam/unpause (1.48s)

TestErrorSpam/stop (1.35s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-865628 --log_dir /tmp/nospam-865628 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-865628 --log_dir /tmp/nospam-865628 stop: (1.179831281s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-865628 --log_dir /tmp/nospam-865628 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-865628 --log_dir /tmp/nospam-865628 stop
--- PASS: TestErrorSpam/stop (1.35s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/17223-136166/.minikube/files/etc/test/nested/copy/143417/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (36.51s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-224127 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-224127 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (36.505183552s)
--- PASS: TestFunctional/serial/StartWithProxy (36.51s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (28.15s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-224127 --alsologtostderr -v=8
E0911 11:16:32.891053  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/addons-387581/client.crt: no such file or directory
E0911 11:16:32.896808  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/addons-387581/client.crt: no such file or directory
E0911 11:16:32.907033  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/addons-387581/client.crt: no such file or directory
E0911 11:16:32.927282  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/addons-387581/client.crt: no such file or directory
E0911 11:16:32.967620  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/addons-387581/client.crt: no such file or directory
E0911 11:16:33.047934  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/addons-387581/client.crt: no such file or directory
E0911 11:16:33.208404  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/addons-387581/client.crt: no such file or directory
E0911 11:16:33.528922  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/addons-387581/client.crt: no such file or directory
E0911 11:16:34.169783  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/addons-387581/client.crt: no such file or directory
E0911 11:16:35.450780  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/addons-387581/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-224127 --alsologtostderr -v=8: (28.149024088s)
functional_test.go:659: soft start took 28.149723543s for "functional-224127" cluster.
--- PASS: TestFunctional/serial/SoftStart (28.15s)

TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.07s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-224127 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.78s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-224127 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-224127 cache add registry.k8s.io/pause:3.3
E0911 11:16:38.011277  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/addons-387581/client.crt: no such file or directory
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-224127 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.78s)

TestFunctional/serial/CacheCmd/cache/add_local (1.69s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-224127 /tmp/TestFunctionalserialCacheCmdcacheadd_local2583178987/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-224127 cache add minikube-local-cache-test:functional-224127
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-224127 cache add minikube-local-cache-test:functional-224127: (1.363976855s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-224127 cache delete minikube-local-cache-test:functional-224127
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-224127
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.69s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.26s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-224127 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.26s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.61s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-224127 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-224127 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-224127 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (265.418181ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-224127 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-224127 ssh sudo crictl inspecti registry.k8s.io/pause:latest
E0911 11:16:43.131786  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/addons-387581/client.crt: no such file or directory
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.61s)
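The sequence above is the point of "cache reload": the image is removed in-node with crictl rmi, the failed inspecti proves it is gone, and the reload pushes every locally cached image back into the runtime. The restored state could be listed with (a sketch, mirroring the ssh form the harness itself uses):

	out/minikube-linux-amd64 -p functional-224127 ssh "sudo crictl images | grep registry.k8s.io/pause"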

TestFunctional/serial/CacheCmd/cache/delete (0.09s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

TestFunctional/serial/MinikubeKubectlCmd (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-224127 kubectl -- --context functional-224127 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-224127 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

TestFunctional/serial/ExtraConfig (31.94s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-224127 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0911 11:16:53.372467  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/addons-387581/client.crt: no such file or directory
E0911 11:17:13.852771  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/addons-387581/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-224127 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (31.937344966s)
functional_test.go:757: restart took 31.937448063s for "functional-224127" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (31.94s)
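--extra-config=apiserver.enable-admission-plugins=... is forwarded into the kube-apiserver static pod's command line, which is why this restart re-rolls the control plane and takes ~32s. Whether the flag took effect can be spot-checked via the standard component=kube-apiserver label (a sketch, not part of the harness):

	kubectl --context functional-224127 -n kube-system get pods -l component=kube-apiserver -o yaml | grep enable-admission-plugins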

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-224127 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)
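
The health check above reduces to reading each control-plane pod's phase and Ready status. An equivalent hand-run query using the same label selector (the jsonpath formatting is an illustrative choice, not what the test itself uses):

	kubectl --context functional-224127 get po -l tier=control-plane -n kube-system \
	  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.phase}{"\n"}{end}'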

                                                
                                    
TestFunctional/serial/LogsCmd (1.33s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-224127 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-224127 logs: (1.327844612s)
--- PASS: TestFunctional/serial/LogsCmd (1.33s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.34s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-224127 logs --file /tmp/TestFunctionalserialLogsFileCmd1219511324/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-224127 logs --file /tmp/TestFunctionalserialLogsFileCmd1219511324/001/logs.txt: (1.339765154s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.34s)

                                                
                                    
TestFunctional/serial/InvalidService (3.98s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-224127 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-224127
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-224127: exit status 115 (317.77497ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:32225 |
	|-----------|-------------|-------------|---------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-224127 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.98s)
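
Exit status 115 (SVC_UNREACHABLE) is expected here because invalid-svc has no running pods behind it. A quick way to see that while the broken service is still applied (a sketch; empty output means no ready endpoints):

	kubectl --context functional-224127 get endpoints invalid-svc -o jsonpath='{.subsets}'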

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-224127 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-224127 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-224127 config get cpus: exit status 14 (71.343943ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-224127 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-224127 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-224127 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-224127 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-224127 config get cpus: exit status 14 (52.779272ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.37s)
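
The exit-status-14 checks above are the point of this test: "config get" on an unset key fails with 14, and set/unset round-trip cleanly. The same sequence by hand:

	minikube -p functional-224127 config get cpus; echo "exit=$?"   # exit=14 while cpus is unset
	minikube -p functional-224127 config set cpus 2
	minikube -p functional-224127 config get cpus                   # prints 2, exit 0
	minikube -p functional-224127 config unset cpus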

                                                
                                    
TestFunctional/parallel/DashboardCmd (11.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-224127 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-224127 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 179518: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (11.11s)

                                                
                                    
TestFunctional/parallel/DryRun (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-224127 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-224127 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (179.391385ms)

                                                
                                                
-- stdout --
	* [functional-224127] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17223
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17223-136166/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17223-136166/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0911 11:17:56.463996  178528 out.go:296] Setting OutFile to fd 1 ...
	I0911 11:17:56.464112  178528 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 11:17:56.464120  178528 out.go:309] Setting ErrFile to fd 2...
	I0911 11:17:56.464125  178528 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 11:17:56.464329  178528 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17223-136166/.minikube/bin
	I0911 11:17:56.464939  178528 out.go:303] Setting JSON to false
	I0911 11:17:56.466330  178528 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":3625,"bootTime":1694427452,"procs":367,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0911 11:17:56.466401  178528 start.go:138] virtualization: kvm guest
	I0911 11:17:56.468945  178528 out.go:177] * [functional-224127] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0911 11:17:56.470830  178528 out.go:177]   - MINIKUBE_LOCATION=17223
	I0911 11:17:56.470854  178528 notify.go:220] Checking for updates...
	I0911 11:17:56.473682  178528 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0911 11:17:56.475295  178528 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17223-136166/kubeconfig
	I0911 11:17:56.476922  178528 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17223-136166/.minikube
	I0911 11:17:56.478506  178528 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0911 11:17:56.480020  178528 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0911 11:17:56.482787  178528 config.go:182] Loaded profile config "functional-224127": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0911 11:17:56.483322  178528 driver.go:373] Setting default libvirt URI to qemu:///system
	I0911 11:17:56.512040  178528 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I0911 11:17:56.512123  178528 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0911 11:17:56.589284  178528 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:47 SystemTime:2023-09-11 11:17:56.577583097 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1041-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0911 11:17:56.589411  178528 docker.go:294] overlay module found
	I0911 11:17:56.591563  178528 out.go:177] * Using the docker driver based on existing profile
	I0911 11:17:56.592915  178528 start.go:298] selected driver: docker
	I0911 11:17:56.592932  178528 start.go:902] validating driver "docker" against &{Name:functional-224127 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:functional-224127 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 11:17:56.593053  178528 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0911 11:17:56.595673  178528 out.go:177] 
	W0911 11:17:56.597143  178528 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0911 11:17:56.598637  178528 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-224127 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.40s)
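
Exit status 23 is minikube's RSRC_INSUFFICIENT_REQ_MEMORY code in this run: --dry-run still validates flags, and the requested 250MiB is below the 1800MB usable minimum named in the stderr above. Reproducing the check by hand (a sketch):

	minikube start -p functional-224127 --dry-run --memory 250MB --driver=docker --container-runtime=crio
	echo $?   # 23 in this run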

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-224127 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-224127 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (167.088444ms)

                                                
                                                
-- stdout --
	* [functional-224127] minikube v1.31.2 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17223
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17223-136166/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17223-136166/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0911 11:17:56.874156  178872 out.go:296] Setting OutFile to fd 1 ...
	I0911 11:17:56.874302  178872 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 11:17:56.874312  178872 out.go:309] Setting ErrFile to fd 2...
	I0911 11:17:56.874319  178872 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 11:17:56.874720  178872 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17223-136166/.minikube/bin
	I0911 11:17:56.875422  178872 out.go:303] Setting JSON to false
	I0911 11:17:56.876882  178872 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":3625,"bootTime":1694427452,"procs":371,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0911 11:17:56.876955  178872 start.go:138] virtualization: kvm guest
	I0911 11:17:56.879297  178872 out.go:177] * [functional-224127] minikube v1.31.2 sur Ubuntu 20.04 (kvm/amd64)
	I0911 11:17:56.881088  178872 out.go:177]   - MINIKUBE_LOCATION=17223
	I0911 11:17:56.881125  178872 notify.go:220] Checking for updates...
	I0911 11:17:56.882720  178872 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0911 11:17:56.884817  178872 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17223-136166/kubeconfig
	I0911 11:17:56.886371  178872 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17223-136166/.minikube
	I0911 11:17:56.887787  178872 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0911 11:17:56.889255  178872 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0911 11:17:56.891020  178872 config.go:182] Loaded profile config "functional-224127": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0911 11:17:56.891454  178872 driver.go:373] Setting default libvirt URI to qemu:///system
	I0911 11:17:56.916883  178872 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I0911 11:17:56.916999  178872 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0911 11:17:56.981488  178872 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:47 SystemTime:2023-09-11 11:17:56.972265598 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1041-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0911 11:17:56.981589  178872 docker.go:294] overlay module found
	I0911 11:17:56.983440  178872 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0911 11:17:56.984959  178872 start.go:298] selected driver: docker
	I0911 11:17:56.984976  178872 start.go:902] validating driver "docker" against &{Name:functional-224127 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:functional-224127 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 11:17:56.985113  178872 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0911 11:17:56.987546  178872 out.go:177] 
	W0911 11:17:56.988935  178872 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0911 11:17:56.990386  178872 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.17s)
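
The French output above is the same dry-run failure rendered through minikube's translations, which are selected from the locale environment. A hedged sketch of forcing it locally (assumes a French locale such as fr_FR.UTF-8 is installed on the host):

	LC_ALL=fr_FR.UTF-8 minikube start -p functional-224127 --dry-run --memory 250MB --driver=docker --container-runtime=crio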

                                                
                                    
TestFunctional/parallel/StatusCmd (0.91s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-224127 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-224127 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-224127 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.91s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (9.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1628: (dbg) Run:  kubectl --context functional-224127 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-224127 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-x47g9" [b3d16125-cc94-4a31-a9bb-ac9153ac8060] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-x47g9" [b3d16125-cc94-4a31-a9bb-ac9153ac8060] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 9.018160635s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-amd64 -p functional-224127 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.49.2:31443
functional_test.go:1674: http://192.168.49.2:31443: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-55497b8b78-x47g9

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31443
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (9.69s)
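
The echoserver body above is what the NodePort URL returns once the deployment is Ready. Fetching it by hand (a sketch; the port is assigned per run):

	url=$(minikube -p functional-224127 service hello-node-connect --url)
	curl -s "$url"   # prints the Hostname / Request Information block shown above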

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-amd64 -p functional-224127 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-amd64 -p functional-224127 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.12s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (34.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [99dbd28b-c60e-4f0a-a351-eec14456111d] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.047273886s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-224127 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-224127 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-224127 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-224127 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [0f16fbfa-2e76-4ba8-b610-cb4c5e5c537b] Pending
helpers_test.go:344: "sp-pod" [0f16fbfa-2e76-4ba8-b610-cb4c5e5c537b] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [0f16fbfa-2e76-4ba8-b610-cb4c5e5c537b] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 20.010119183s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-224127 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-224127 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-224127 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [eec605d9-fb92-49aa-b0f4-b09435b84405] Pending
helpers_test.go:344: "sp-pod" [eec605d9-fb92-49aa-b0f4-b09435b84405] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [eec605d9-fb92-49aa-b0f4-b09435b84405] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.012739431s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-224127 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (34.05s)
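
The sequence above is the persistence proof: write a file into the claim, delete the pod, recreate it, and confirm the file is still there. A by-hand equivalent (a sketch; "kubectl wait" stands in for the test's pod-watch loop):

	kubectl --context functional-224127 exec sp-pod -- touch /tmp/mount/foo
	kubectl --context functional-224127 delete -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-224127 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-224127 wait --for=condition=Ready pod/sp-pod --timeout=3m
	kubectl --context functional-224127 exec sp-pod -- ls /tmp/mount   # foo survived the pod restart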

                                                
                                    
TestFunctional/parallel/SSHCmd (0.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-amd64 -p functional-224127 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-amd64 -p functional-224127 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.56s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-224127 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-224127 ssh -n functional-224127 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-224127 cp functional-224127:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd4258715895/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-224127 ssh -n functional-224127 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.50s)
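
The cp test round-trips a file host -> node -> host. A by-hand sketch of the same round trip, with a local diff as the check:

	minikube -p functional-224127 cp testdata/cp-test.txt /home/docker/cp-test.txt
	minikube -p functional-224127 cp functional-224127:/home/docker/cp-test.txt /tmp/cp-test.txt
	diff testdata/cp-test.txt /tmp/cp-test.txt   # no output: contents preserved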

                                                
                                    
TestFunctional/parallel/MySQL (25.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-224127 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-zmxnx" [7b5d84df-2c07-4954-8111-56b735476207] Pending
helpers_test.go:344: "mysql-859648c796-zmxnx" [7b5d84df-2c07-4954-8111-56b735476207] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-zmxnx" [7b5d84df-2c07-4954-8111-56b735476207] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 20.021466113s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-224127 exec mysql-859648c796-zmxnx -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-224127 exec mysql-859648c796-zmxnx -- mysql -ppassword -e "show databases;": exit status 1 (206.588671ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-224127 exec mysql-859648c796-zmxnx -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-224127 exec mysql-859648c796-zmxnx -- mysql -ppassword -e "show databases;": exit status 1 (144.140549ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-224127 exec mysql-859648c796-zmxnx -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-224127 exec mysql-859648c796-zmxnx -- mysql -ppassword -e "show databases;": exit status 1 (227.903383ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-224127 exec mysql-859648c796-zmxnx -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (25.24s)
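
The three failed queries above are startup noise: ERROR 1045 and ERROR 2002 both occur while mysqld is still initializing inside the pod, and the test simply retries until the query succeeds. A hedged by-hand equivalent (deploy/mysql is kubectl's way of targeting any pod of the deployment):

	until kubectl --context functional-224127 exec deploy/mysql -- mysql -ppassword -e 'show databases;'; do
	  sleep 5   # retry through init-time 1045/2002 errors
	done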

                                                
                                    
TestFunctional/parallel/FileSync (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/143417/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-224127 ssh "sudo cat /etc/test/nested/copy/143417/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.32s)
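
File sync works by mirroring everything under the minikube home's files/ directory into the node at the same path on the next start. A sketch using the default ~/.minikube location (this run uses a Jenkins-specific MINIKUBE_HOME instead):

	mkdir -p ~/.minikube/files/etc/test/nested/copy/143417
	echo 'Test file for checking file sync process' > ~/.minikube/files/etc/test/nested/copy/143417/hosts
	minikube -p functional-224127 ssh 'cat /etc/test/nested/copy/143417/hosts'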

                                                
                                    
TestFunctional/parallel/CertSync (1.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/143417.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-224127 ssh "sudo cat /etc/ssl/certs/143417.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/143417.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-224127 ssh "sudo cat /usr/share/ca-certificates/143417.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-224127 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/1434172.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-224127 ssh "sudo cat /etc/ssl/certs/1434172.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/1434172.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-224127 ssh "sudo cat /usr/share/ca-certificates/1434172.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-224127 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.48s)
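
The paired names above (143417.pem alongside 51391683.0, and 1434172.pem alongside 3ec20f2e.0) look like standard OpenSSL subject-hash names for the synced certs. A sketch of checking that, assuming openssl is available inside the node image:

	minikube -p functional-224127 ssh 'openssl x509 -in /etc/ssl/certs/143417.pem -noout -hash'   # expect 51391683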

                                                
                                    
TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-224127 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-224127 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-224127 ssh "sudo systemctl is-active docker": exit status 1 (267.615295ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-224127 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-224127 ssh "sudo systemctl is-active containerd": exit status 1 (259.059698ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.53s)
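
"systemctl is-active" exits 0 only for an active unit and 3 for an inactive one, which is the "Process exited with status 3" seen above; on this crio cluster, docker and containerd should both be inactive. By hand:

	minikube -p functional-224127 ssh 'sudo systemctl is-active crio'     # active, exit 0
	minikube -p functional-224127 ssh 'sudo systemctl is-active docker'   # inactive, exit 3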

                                                
                                    
TestFunctional/parallel/License (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.23s)

                                                
                                    
TestFunctional/parallel/Version/short (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-224127 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

                                                
                                    
TestFunctional/parallel/Version/components (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-224127 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.53s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-224127 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-224127 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.1
registry.k8s.io/kube-proxy:v1.28.1
registry.k8s.io/kube-controller-manager:v1.28.1
registry.k8s.io/kube-apiserver:v1.28.1
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-224127
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20230511-dc714da8
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-224127 image ls --format short --alsologtostderr:
I0911 11:17:59.679819  180486 out.go:296] Setting OutFile to fd 1 ...
I0911 11:17:59.680136  180486 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0911 11:17:59.680511  180486 out.go:309] Setting ErrFile to fd 2...
I0911 11:17:59.680559  180486 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0911 11:17:59.680803  180486 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17223-136166/.minikube/bin
I0911 11:17:59.681395  180486 config.go:182] Loaded profile config "functional-224127": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
I0911 11:17:59.681559  180486 config.go:182] Loaded profile config "functional-224127": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
I0911 11:17:59.681980  180486 cli_runner.go:164] Run: docker container inspect functional-224127 --format={{.State.Status}}
I0911 11:17:59.724943  180486 ssh_runner.go:195] Run: systemctl --version
I0911 11:17:59.724989  180486 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-224127
I0911 11:17:59.757330  180486 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32902 SSHKeyPath:/home/jenkins/minikube-integration/17223-136166/.minikube/machines/functional-224127/id_rsa Username:docker}
I0911 11:17:59.850720  180486 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.31s)
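
As the stderr trace shows, "image ls" on a crio cluster is backed by "sudo crictl images --output json" run over ssh. The same raw data can be pulled directly (a sketch):

	minikube -p functional-224127 ssh 'sudo crictl images --output json' | head -c 400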

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-224127 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-224127 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/coredns/coredns         | v1.10.1            | ead0a4a53df89 | 53.6MB |
| registry.k8s.io/kube-scheduler          | v1.28.1            | b462ce0c8b1ff | 61.5MB |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| gcr.io/google-containers/addon-resizer  | functional-224127  | ffd4cfbbe753e | 34.1MB |
| docker.io/library/nginx                 | alpine             | 433dbc17191a7 | 44.4MB |
| registry.k8s.io/etcd                    | 3.5.9-0            | 73deb9a3f7025 | 295MB  |
| registry.k8s.io/kube-apiserver          | v1.28.1            | 5c801295c21d0 | 127MB  |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| docker.io/kindest/kindnetd              | v20230511-dc714da8 | b0b1fa0f58c6e | 65.2MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-controller-manager | v1.28.1            | 821b3dfea27be | 123MB  |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
| docker.io/library/nginx                 | latest             | f5a6b296b8a29 | 191MB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/kube-proxy              | v1.28.1            | 6cdbabde3874e | 74.7MB |
| docker.io/library/mysql                 | 5.7                | 92034fe9a41f4 | 601MB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-224127 image ls --format table --alsologtostderr:
I0911 11:18:00.379745  181026 out.go:296] Setting OutFile to fd 1 ...
I0911 11:18:00.379867  181026 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0911 11:18:00.379877  181026 out.go:309] Setting ErrFile to fd 2...
I0911 11:18:00.379884  181026 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0911 11:18:00.380087  181026 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17223-136166/.minikube/bin
I0911 11:18:00.380705  181026 config.go:182] Loaded profile config "functional-224127": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
I0911 11:18:00.380806  181026 config.go:182] Loaded profile config "functional-224127": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
I0911 11:18:00.381163  181026 cli_runner.go:164] Run: docker container inspect functional-224127 --format={{.State.Status}}
I0911 11:18:00.399540  181026 ssh_runner.go:195] Run: systemctl --version
I0911 11:18:00.399588  181026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-224127
I0911 11:18:00.417224  181026 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32902 SSHKeyPath:/home/jenkins/minikube-integration/17223-136166/.minikube/machines/functional-224127/id_rsa Username:docker}
I0911 11:18:00.506518  181026 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.21s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-224127 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-224127 image ls --format json --alsologtostderr:
[{"id":"6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5","repoDigests":["registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3","registry.k8s.io/kube-proxy@sha256:30096ad233e7bfe72662180c5ac4497f732346d6d25b7c1f1c0c7cb1a1e7e41c"],"repoTags":["registry.k8s.io/kube-proxy:v1.28.1"],"size":"74680215"},{"id":"b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a","repoDigests":["registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4","registry.k8s.io/kube-scheduler@sha256:7e621071b5174e9c6c0e0268ddbbc9139d6cba29052bbb1131890bf91d06bf1e"],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.1"],"size":"61477686"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"350b164e7ae1dcddeffadd65c76226c9b
6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da","repoDigests":["docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974","docker.io/kindest/kindnetd@sha256:7c15172bd152f05b102cea9c8f82ef5abeb56797ec85630923fb98d20fd519e9"],"repoTags":["docker.io/kindest/kindnetd:v20230511-dc714da8"],"size":"65249302"},{"id":"f5a6b296b8a29b4e3d89ffa99e4a86309874ae400e82b3d3993f84e1e3bb0eb9","repoDigests":["docker.io/library/nginx@sha256:6926dd802f40e5e7257fded83e0d8030039642e4e10c4a98a6478e9c6fe06153","docker.io/library/nginx@sha256:9504f3f64a3f16f0eaf9adca3542ff8b2a6880e6abfb13e478cca23f6380080a"],"repoTags":["docker.io/library/nginx:latest"],"size":"190820093"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/
google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-224127"],"size":"34114467"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77","repoDigests":["registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774","registry.k8
s.io/kube-apiserver@sha256:f517207d13adeb50c63f9bdac2824e0e7512817eca47ac0540685771243742b2"],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.1"],"size":"126972880"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"},{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e","registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53621675"},{"id":"92034fe9a41f4344b97f3fc88a8796248e2cfa9b934be58379f3dbc150d07d9d","repoDigests":["docker.io/library/mysql@sha256:2c23f254c6b9444ecda9ba
36051a9800e8934a2f5828ecc8730531db8142af83","docker.io/library/mysql@sha256:aaa1374f1e6c24d73e9dfa8f2cdae81c8054e6d1d80c32da883a9050258b6e83"],"repoTags":["docker.io/library/mysql:5.7"],"size":"601277093"},{"id":"433dbc17191a7830a9db6454bcc23414ad36caecedab39d1e51d41083ab1d629","repoDigests":["docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70","docker.io/library/nginx@sha256:7ba6006df2033690d8c64bd8df69e4a1957b78e57b4e32141c78d72a5e0de63d"],"repoTags":["docker.io/library/nginx:alpine"],"size":"44389673"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","
repoDigests":["registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15","registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"295456551"},{"id":"821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830","registry.k8s.io/kube-controller-manager@sha256:dda6dba8a55203ed1595efcda865a526b9282c2d9b959e9ed0a88f54a7a91195"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.1"],"size":"123163446"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-224127 image ls --format json --alsologtostderr:
I0911 11:18:00.165698  180913 out.go:296] Setting OutFile to fd 1 ...
I0911 11:18:00.165811  180913 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0911 11:18:00.165819  180913 out.go:309] Setting ErrFile to fd 2...
I0911 11:18:00.165823  180913 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0911 11:18:00.166010  180913 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17223-136166/.minikube/bin
I0911 11:18:00.166725  180913 config.go:182] Loaded profile config "functional-224127": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
I0911 11:18:00.166817  180913 config.go:182] Loaded profile config "functional-224127": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
I0911 11:18:00.167143  180913 cli_runner.go:164] Run: docker container inspect functional-224127 --format={{.State.Status}}
I0911 11:18:00.185596  180913 ssh_runner.go:195] Run: systemctl --version
I0911 11:18:00.185653  180913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-224127
I0911 11:18:00.207036  180913 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32902 SSHKeyPath:/home/jenkins/minikube-integration/17223-136166/.minikube/machines/functional-224127/id_rsa Username:docker}
I0911 11:18:00.298262  180913 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)
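The JSON above is a single flat array of image records. A minimal Go sketch (not part of the test suite; the binary path and profile name are taken from the log, the struct name is ours) that would decode it:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// imageRecord mirrors the fields visible in the `image ls --format json`
// output above.
type imageRecord struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"`
}

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-224127",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		panic(err)
	}
	var images []imageRecord
	if err := json.Unmarshal(out, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		fmt.Printf("%s (%s bytes)\n", img.RepoTags, img.Size)
	}
}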

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-224127 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-224127 image ls --format yaml --alsologtostderr:
- id: 5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774
- registry.k8s.io/kube-apiserver@sha256:f517207d13adeb50c63f9bdac2824e0e7512817eca47ac0540685771243742b2
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.1
size: "126972880"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-224127
size: "34114467"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests:
- registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15
- registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "295456551"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: f5a6b296b8a29b4e3d89ffa99e4a86309874ae400e82b3d3993f84e1e3bb0eb9
repoDigests:
- docker.io/library/nginx@sha256:6926dd802f40e5e7257fded83e0d8030039642e4e10c4a98a6478e9c6fe06153
- docker.io/library/nginx@sha256:9504f3f64a3f16f0eaf9adca3542ff8b2a6880e6abfb13e478cca23f6380080a
repoTags:
- docker.io/library/nginx:latest
size: "190820093"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
- registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53621675"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4
- registry.k8s.io/kube-scheduler@sha256:7e621071b5174e9c6c0e0268ddbbc9139d6cba29052bbb1131890bf91d06bf1e
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.1
size: "61477686"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da
repoDigests:
- docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974
- docker.io/kindest/kindnetd@sha256:7c15172bd152f05b102cea9c8f82ef5abeb56797ec85630923fb98d20fd519e9
repoTags:
- docker.io/kindest/kindnetd:v20230511-dc714da8
size: "65249302"
- id: 433dbc17191a7830a9db6454bcc23414ad36caecedab39d1e51d41083ab1d629
repoDigests:
- docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70
- docker.io/library/nginx@sha256:7ba6006df2033690d8c64bd8df69e4a1957b78e57b4e32141c78d72a5e0de63d
repoTags:
- docker.io/library/nginx:alpine
size: "44389673"
- id: 6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5
repoDigests:
- registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3
- registry.k8s.io/kube-proxy@sha256:30096ad233e7bfe72662180c5ac4497f732346d6d25b7c1f1c0c7cb1a1e7e41c
repoTags:
- registry.k8s.io/kube-proxy:v1.28.1
size: "74680215"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"
- id: 92034fe9a41f4344b97f3fc88a8796248e2cfa9b934be58379f3dbc150d07d9d
repoDigests:
- docker.io/library/mysql@sha256:2c23f254c6b9444ecda9ba36051a9800e8934a2f5828ecc8730531db8142af83
- docker.io/library/mysql@sha256:aaa1374f1e6c24d73e9dfa8f2cdae81c8054e6d1d80c32da883a9050258b6e83
repoTags:
- docker.io/library/mysql:5.7
size: "601277093"
- id: 821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830
- registry.k8s.io/kube-controller-manager@sha256:dda6dba8a55203ed1595efcda865a526b9282c2d9b959e9ed0a88f54a7a91195
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.1
size: "123163446"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-224127 image ls --format yaml --alsologtostderr:
I0911 11:17:59.942931  180816 out.go:296] Setting OutFile to fd 1 ...
I0911 11:17:59.943083  180816 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0911 11:17:59.943091  180816 out.go:309] Setting ErrFile to fd 2...
I0911 11:17:59.943097  180816 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0911 11:17:59.943368  180816 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17223-136166/.minikube/bin
I0911 11:17:59.944009  180816 config.go:182] Loaded profile config "functional-224127": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
I0911 11:17:59.944145  180816 config.go:182] Loaded profile config "functional-224127": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
I0911 11:17:59.944557  180816 cli_runner.go:164] Run: docker container inspect functional-224127 --format={{.State.Status}}
I0911 11:17:59.964885  180816 ssh_runner.go:195] Run: systemctl --version
I0911 11:17:59.964933  180816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-224127
I0911 11:17:59.981150  180816 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32902 SSHKeyPath:/home/jenkins/minikube-integration/17223-136166/.minikube/machines/functional-224127/id_rsa Username:docker}
I0911 11:18:00.074780  180816 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

TestFunctional/parallel/ImageCommands/ImageBuild (2.44s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-224127 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-224127 ssh pgrep buildkitd: exit status 1 (248.898906ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-224127 image build -t localhost/my-image:functional-224127 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-224127 image build -t localhost/my-image:functional-224127 testdata/build --alsologtostderr: (1.985885015s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-224127 image build -t localhost/my-image:functional-224127 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> e3d1ade1ab8
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-224127
--> 7f427507cf5
Successfully tagged localhost/my-image:functional-224127
7f427507cf5cf7308f0437a623c8925f9294aa3924bf99ea38e9707e40e8d2ac
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-224127 image build -t localhost/my-image:functional-224127 testdata/build --alsologtostderr:
I0911 11:18:00.390637  181036 out.go:296] Setting OutFile to fd 1 ...
I0911 11:18:00.390810  181036 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0911 11:18:00.390821  181036 out.go:309] Setting ErrFile to fd 2...
I0911 11:18:00.390826  181036 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0911 11:18:00.391040  181036 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17223-136166/.minikube/bin
I0911 11:18:00.391646  181036 config.go:182] Loaded profile config "functional-224127": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
I0911 11:18:00.392239  181036 config.go:182] Loaded profile config "functional-224127": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
I0911 11:18:00.392696  181036 cli_runner.go:164] Run: docker container inspect functional-224127 --format={{.State.Status}}
I0911 11:18:00.409735  181036 ssh_runner.go:195] Run: systemctl --version
I0911 11:18:00.409790  181036 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-224127
I0911 11:18:00.427793  181036 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32902 SSHKeyPath:/home/jenkins/minikube-integration/17223-136166/.minikube/machines/functional-224127/id_rsa Username:docker}
I0911 11:18:00.514338  181036 build_images.go:151] Building image from path: /tmp/build.3731774572.tar
I0911 11:18:00.514397  181036 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0911 11:18:00.522646  181036 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3731774572.tar
I0911 11:18:00.525858  181036 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3731774572.tar: stat -c "%s %y" /var/lib/minikube/build/build.3731774572.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3731774572.tar': No such file or directory
I0911 11:18:00.525892  181036 ssh_runner.go:362] scp /tmp/build.3731774572.tar --> /var/lib/minikube/build/build.3731774572.tar (3072 bytes)
I0911 11:18:00.550617  181036 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3731774572
I0911 11:18:00.558627  181036 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3731774572 -xf /var/lib/minikube/build/build.3731774572.tar
I0911 11:18:00.568225  181036 crio.go:297] Building image: /var/lib/minikube/build/build.3731774572
I0911 11:18:00.568297  181036 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-224127 /var/lib/minikube/build/build.3731774572 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0911 11:18:02.314690  181036 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-224127 /var/lib/minikube/build/build.3731774572 --cgroup-manager=cgroupfs: (1.746367743s)
I0911 11:18:02.314752  181036 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3731774572
I0911 11:18:02.322941  181036 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3731774572.tar
I0911 11:18:02.330650  181036 build_images.go:207] Built localhost/my-image:functional-224127 from /tmp/build.3731774572.tar
I0911 11:18:02.330682  181036 build_images.go:123] succeeded building to: functional-224127
I0911 11:18:02.330686  181036 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-224127 image ls
2023/09/11 11:18:07 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.44s)
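The build trace above shows the crio path end to end: the context is tarred on the host, copied to /var/lib/minikube/build on the node, unpacked, and built with `sudo podman build`. The three STEP lines imply a Containerfile equivalent to FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt / (a reconstruction; the contents of testdata/build are not shown here). A minimal Go sketch of the host-side invocation, assuming the binary path, profile, and tag shown in the log:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same invocation as functional_test.go:314 above.
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "functional-224127",
		"image", "build", "-t", "localhost/my-image:functional-224127",
		"testdata/build", "--alsologtostderr")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		panic(err)
	}
}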

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.37s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.345701963s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-224127
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.37s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.89s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-224127 image load --daemon gcr.io/google-containers/addon-resizer:functional-224127 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-224127 image load --daemon gcr.io/google-containers/addon-resizer:functional-224127 --alsologtostderr: (3.689846047s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-224127 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.89s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (9.65s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-224127 image load --daemon gcr.io/google-containers/addon-resizer:functional-224127 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-224127 image load --daemon gcr.io/google-containers/addon-resizer:functional-224127 --alsologtostderr: (9.441040522s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-224127 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (9.65s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.61s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-224127 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-224127 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-224127 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-224127 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 174200: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.61s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-224127 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (22.38s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-224127 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [0578e999-fa1c-424a-8445-1ed92202f507] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [0578e999-fa1c-424a-8445-1ed92202f507] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 22.074545841s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (22.38s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (8.1s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.207675089s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-224127
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-224127 image load --daemon gcr.io/google-containers/addon-resizer:functional-224127 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-224127 image load --daemon gcr.io/google-containers/addon-resizer:functional-224127 --alsologtostderr: (6.67355577s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-224127 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (8.10s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.84s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-224127 image save gcr.io/google-containers/addon-resizer:functional-224127 /home/jenkins/workspace/Docker_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.84s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.53s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-224127 image rm gcr.io/google-containers/addon-resizer:functional-224127 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-224127 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.53s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.89s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-224127 image load /home/jenkins/workspace/Docker_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-224127 image load /home/jenkins/workspace/Docker_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (1.673341476s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-224127 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.89s)
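ImageSaveToFile, ImageRemove, and ImageLoadFromFile together exercise a save/remove/load round trip through a tarball. A minimal Go sketch of that round trip, assuming the binary path, profile, and tag from the log; the tar path here is illustrative:

package main

import (
	"os"
	"os/exec"
)

// run shells out to the minikube binary used above; panics on failure.
func run(args ...string) {
	cmd := exec.Command("out/minikube-linux-amd64", args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}

func main() {
	const tag = "gcr.io/google-containers/addon-resizer:functional-224127"
	const tar = "/tmp/addon-resizer-save.tar" // illustrative path
	run("-p", "functional-224127", "image", "save", tag, tar)
	run("-p", "functional-224127", "image", "rm", tag)
	run("-p", "functional-224127", "image", "load", tar)
}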

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.15s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-224127 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.15s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.16s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-224127 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.16s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.13s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-224127 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.13s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.02s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-224127
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-224127 image save --daemon gcr.io/google-containers/addon-resizer:functional-224127 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-224127
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.02s)

TestFunctional/parallel/ServiceCmd/DeployApp (7.27s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1438: (dbg) Run:  kubectl --context functional-224127 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-224127 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-6mzpz" [6f2f0e6f-497f-46d3-9df0-9c3f87d650f5] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-6mzpz" [6f2f0e6f-497f-46d3-9df0-9c3f87d650f5] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.070757537s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.27s)
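The DeployApp flow above is: create the deployment, expose it as a NodePort service, then poll until a matching pod reports Running. A rough Go sketch of that flow using kubectl directly; the context and image are from the log, the polling loop is illustrative rather than the test's actual helper:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	kubectl := func(args ...string) ([]byte, error) {
		full := append([]string{"--context", "functional-224127"}, args...)
		return exec.Command("kubectl", full...).CombinedOutput()
	}
	// Errors from create/expose are ignored here for brevity.
	kubectl("create", "deployment", "hello-node", "--image=registry.k8s.io/echoserver:1.8")
	kubectl("expose", "deployment", "hello-node", "--type=NodePort", "--port=8080")

	deadline := time.Now().Add(10 * time.Minute)
	for time.Now().Before(deadline) {
		out, err := kubectl("get", "pods", "-l", "app=hello-node",
			"-o", "jsonpath={.items[0].status.phase}")
		if err == nil && string(out) == "Running" {
			fmt.Println("app=hello-node is running")
			return
		}
		time.Sleep(2 * time.Second)
	}
	panic("timed out waiting for app=hello-node")
}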

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.34s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.34s)

TestFunctional/parallel/ProfileCmd/profile_list (0.34s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1314: Took "287.543502ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1328: Took "54.362337ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.34s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.37s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1365: Took "318.761932ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1378: Took "50.938797ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.37s)

TestFunctional/parallel/MountCmd/any-port (6.73s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-224127 /tmp/TestFunctionalparallelMountCmdany-port216527403/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1694431070772349904" to /tmp/TestFunctionalparallelMountCmdany-port216527403/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1694431070772349904" to /tmp/TestFunctionalparallelMountCmdany-port216527403/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1694431070772349904" to /tmp/TestFunctionalparallelMountCmdany-port216527403/001/test-1694431070772349904
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-224127 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-224127 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (274.489565ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-224127 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-224127 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 11 11:17 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 11 11:17 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 11 11:17 test-1694431070772349904
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-224127 ssh cat /mount-9p/test-1694431070772349904
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-224127 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [27014573-249d-484f-9ac3-c93a050a286e] Pending
helpers_test.go:344: "busybox-mount" [27014573-249d-484f-9ac3-c93a050a286e] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [27014573-249d-484f-9ac3-c93a050a286e] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [27014573-249d-484f-9ac3-c93a050a286e] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.011414375s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-224127 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-224127 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-224127 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-224127 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-224127 /tmp/TestFunctionalparallelMountCmdany-port216527403/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (6.73s)
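The any-port flow starts the 9p mount daemon in the background, then retries findmnt inside the guest until the mount appears (the first probe above failed because the daemon was still starting). A minimal Go sketch of that check, with an illustrative host directory; binary path and profile are from the log:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Start the mount daemon in the background.
	mount := exec.Command("out/minikube-linux-amd64", "mount",
		"-p", "functional-224127", "/tmp/mount-src:/mount-9p")
	if err := mount.Start(); err != nil {
		panic(err)
	}
	defer mount.Process.Kill()

	// Poll until the 9p mount is visible inside the guest.
	for i := 0; i < 10; i++ {
		out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-224127",
			"ssh", "findmnt -T /mount-9p").CombinedOutput()
		if err == nil {
			fmt.Print(string(out))
			return
		}
		time.Sleep(time.Second)
	}
	panic("mount never became visible in the guest")
}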

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-224127 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
E0911 11:17:54.813296  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/addons-387581/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.96.62.122 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
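AccessDirect verifies that, with `minikube tunnel` running, the service's cluster IP (10.96.62.122 above) is routable from the host. A minimal Go sketch of that probe:

package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	// The IP is the one reported in the log; it only resolves while the
	// tunnel daemon is running.
	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Get("http://10.96.62.122")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println("tunnel status:", resp.Status)
}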

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-224127 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ServiceCmd/List (0.94s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-amd64 -p functional-224127 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.94s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.92s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-amd64 -p functional-224127 service list -o json
functional_test.go:1493: Took "919.9951ms" to run "out/minikube-linux-amd64 -p functional-224127 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.92s)

TestFunctional/parallel/MountCmd/specific-port (2.04s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-224127 /tmp/TestFunctionalparallelMountCmdspecific-port229183403/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-224127 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-224127 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (321.858544ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-224127 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-224127 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-224127 /tmp/TestFunctionalparallelMountCmdspecific-port229183403/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-224127 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-224127 ssh "sudo umount -f /mount-9p": exit status 1 (274.776869ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-224127 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-224127 /tmp/TestFunctionalparallelMountCmdspecific-port229183403/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.04s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.58s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-amd64 -p functional-224127 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.49.2:30508
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.58s)

TestFunctional/parallel/ServiceCmd/Format (0.55s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-amd64 -p functional-224127 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.55s)

TestFunctional/parallel/ServiceCmd/URL (0.64s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-amd64 -p functional-224127 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.49.2:30508
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.64s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.9s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-224127 /tmp/TestFunctionalparallelMountCmdVerifyCleanup999393801/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-224127 /tmp/TestFunctionalparallelMountCmdVerifyCleanup999393801/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-224127 /tmp/TestFunctionalparallelMountCmdVerifyCleanup999393801/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-224127 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-224127 ssh "findmnt -T" /mount1: exit status 1 (356.692081ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-224127 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-224127 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-224127 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-224127 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-224127 /tmp/TestFunctionalparallelMountCmdVerifyCleanup999393801/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-224127 /tmp/TestFunctionalparallelMountCmdVerifyCleanup999393801/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-224127 /tmp/TestFunctionalparallelMountCmdVerifyCleanup999393801/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.90s)

TestFunctional/delete_addon-resizer_images (0.08s)
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-224127
--- PASS: TestFunctional/delete_addon-resizer_images (0.08s)

TestFunctional/delete_my-image_image (0.02s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-224127
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-224127
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestIngressAddonLegacy/StartLegacyK8sCluster (66.39s)
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-452365 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E0911 11:19:16.735359  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/addons-387581/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-452365 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (1m6.391816149s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (66.39s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (10.77s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-452365 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-452365 addons enable ingress --alsologtostderr -v=5: (10.77184047s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (10.77s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.52s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-452365 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.52s)

TestJSONOutput/start/Command (65.47s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-765513 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
E0911 11:22:42.745476  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/functional-224127/client.crt: no such file or directory
E0911 11:23:03.225651  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/functional-224127/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-765513 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (1m5.466935365s)
--- PASS: TestJSONOutput/start/Command (65.47s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.67s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-765513 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.67s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.59s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-765513 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.59s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.77s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-765513 --output=json --user=testUser
E0911 11:23:44.186602  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/functional-224127/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-765513 --output=json --user=testUser: (5.771491951s)
--- PASS: TestJSONOutput/stop/Command (5.77s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.19s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-725556 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-725556 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (63.812058ms)

-- stdout --
	{"specversion":"1.0","id":"aeaa262c-259a-4a1c-99af-d8874777e590","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-725556] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"f030fdb7-d8ef-4d95-a089-5c42aef85195","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17223"}}
	{"specversion":"1.0","id":"dd46a9e5-a241-46b6-b430-e38f4d11faf5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"d3063176-346f-4734-a2e7-68a3e660061a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17223-136166/kubeconfig"}}
	{"specversion":"1.0","id":"8bd9b6f0-da96-4495-84a7-5e86f727e7d1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17223-136166/.minikube"}}
	{"specversion":"1.0","id":"10e19084-c42a-40d2-b8b9-895be2156abb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"e47b2400-4088-4c3f-92bb-0e9f6b436d77","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"ddb31202-2862-4114-8b10-401ac9f4cca8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-725556" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-725556
--- PASS: TestErrorJSONOutput (0.19s)

TestKicCustomNetwork/create_custom_network (32.1s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-529535 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-529535 --network=: (30.084309862s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-529535" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-529535
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-529535: (2.001560987s)
--- PASS: TestKicCustomNetwork/create_custom_network (32.10s)

TestKicCustomNetwork/use_default_bridge_network (26.51s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-312054 --network=bridge
E0911 11:24:30.138458  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/ingress-addon-legacy-452365/client.crt: no such file or directory
E0911 11:24:30.143710  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/ingress-addon-legacy-452365/client.crt: no such file or directory
E0911 11:24:30.154055  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/ingress-addon-legacy-452365/client.crt: no such file or directory
E0911 11:24:30.174352  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/ingress-addon-legacy-452365/client.crt: no such file or directory
E0911 11:24:30.214706  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/ingress-addon-legacy-452365/client.crt: no such file or directory
E0911 11:24:30.295021  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/ingress-addon-legacy-452365/client.crt: no such file or directory
E0911 11:24:30.455441  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/ingress-addon-legacy-452365/client.crt: no such file or directory
E0911 11:24:30.775995  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/ingress-addon-legacy-452365/client.crt: no such file or directory
E0911 11:24:31.416706  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/ingress-addon-legacy-452365/client.crt: no such file or directory
E0911 11:24:32.697526  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/ingress-addon-legacy-452365/client.crt: no such file or directory
E0911 11:24:35.258204  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/ingress-addon-legacy-452365/client.crt: no such file or directory
E0911 11:24:40.378423  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/ingress-addon-legacy-452365/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-312054 --network=bridge: (24.620846955s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-312054" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-312054
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-312054: (1.872179658s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (26.51s)

TestKicExistingNetwork (27.39s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-316760 --network=existing-network
E0911 11:24:50.619020  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/ingress-addon-legacy-452365/client.crt: no such file or directory
E0911 11:25:06.107283  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/functional-224127/client.crt: no such file or directory
E0911 11:25:11.099814  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/ingress-addon-legacy-452365/client.crt: no such file or directory
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-316760 --network=existing-network: (25.395245078s)
helpers_test.go:175: Cleaning up "existing-network-316760" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-316760
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-316760: (1.866122697s)
--- PASS: TestKicExistingNetwork (27.39s)

TestKicCustomSubnet (27.47s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-562681 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-562681 --subnet=192.168.60.0/24: (25.502076799s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-562681 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-562681" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-562681
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-562681: (1.949049987s)
--- PASS: TestKicCustomSubnet (27.47s)

TestKicStaticIP (26.95s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-203401 --static-ip=192.168.200.200
E0911 11:25:52.059997  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/ingress-addon-legacy-452365/client.crt: no such file or directory
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-203401 --static-ip=192.168.200.200: (24.790809926s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-203401 ip
helpers_test.go:175: Cleaning up "static-ip-203401" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-203401
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-203401: (2.035113004s)
--- PASS: TestKicStaticIP (26.95s)

TestMainNoArgs (0.04s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

TestMinikubeProfile (52.73s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-662810 --driver=docker  --container-runtime=crio
E0911 11:26:32.891171  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/addons-387581/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-662810 --driver=docker  --container-runtime=crio: (24.25699671s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-666194 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-666194 --driver=docker  --container-runtime=crio: (23.834528113s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-662810
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-666194
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-666194" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-666194
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-666194: (1.842309252s)
helpers_test.go:175: Cleaning up "first-662810" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-662810
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-662810: (1.831131283s)
--- PASS: TestMinikubeProfile (52.73s)

TestMountStart/serial/StartWithMountFirst (5.28s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-040317 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-040317 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (4.276772421s)
--- PASS: TestMountStart/serial/StartWithMountFirst (5.28s)

TestMountStart/serial/VerifyMountFirst (0.24s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-040317 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.24s)

TestMountStart/serial/StartWithMountSecond (5.25s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-055789 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-055789 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (4.244680623s)
E0911 11:27:13.981103  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/ingress-addon-legacy-452365/client.crt: no such file or directory
--- PASS: TestMountStart/serial/StartWithMountSecond (5.25s)

TestMountStart/serial/VerifyMountSecond (0.24s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-055789 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.24s)

TestMountStart/serial/DeleteFirst (1.61s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-040317 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-040317 --alsologtostderr -v=5: (1.608949925s)
--- PASS: TestMountStart/serial/DeleteFirst (1.61s)

TestMountStart/serial/VerifyMountPostDelete (0.24s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-055789 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.24s)

TestMountStart/serial/Stop (1.2s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-055789
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-055789: (1.199700503s)
--- PASS: TestMountStart/serial/Stop (1.20s)

TestMountStart/serial/RestartStopped (7.06s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-055789
E0911 11:27:22.263161  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/functional-224127/client.crt: no such file or directory
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-055789: (6.062970035s)
--- PASS: TestMountStart/serial/RestartStopped (7.06s)

TestMountStart/serial/VerifyMountPostStop (0.24s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-055789 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.24s)

TestMultiNode/serial/FreshStart2Nodes (114.44s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-517978 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E0911 11:27:49.948466  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/functional-224127/client.crt: no such file or directory
multinode_test.go:85: (dbg) Done: out/minikube-linux-amd64 start -p multinode-517978 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m54.005550161s)
multinode_test.go:91: (dbg) Run:  out/minikube-linux-amd64 -p multinode-517978 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (114.44s)

TestMultiNode/serial/DeployApp2Nodes (4.26s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-517978 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:486: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-517978 -- rollout status deployment/busybox
multinode_test.go:486: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-517978 -- rollout status deployment/busybox: (2.594526786s)
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-517978 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:516: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-517978 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:524: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-517978 -- exec busybox-5bc68d56bd-l4r9c -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-517978 -- exec busybox-5bc68d56bd-qrkdr -- nslookup kubernetes.io
multinode_test.go:534: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-517978 -- exec busybox-5bc68d56bd-l4r9c -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-517978 -- exec busybox-5bc68d56bd-qrkdr -- nslookup kubernetes.default
multinode_test.go:542: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-517978 -- exec busybox-5bc68d56bd-l4r9c -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-517978 -- exec busybox-5bc68d56bd-qrkdr -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.26s)

TestMultiNode/serial/AddNode (21.19s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-517978 -v 3 --alsologtostderr
E0911 11:29:30.135910  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/ingress-addon-legacy-452365/client.crt: no such file or directory
multinode_test.go:110: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-517978 -v 3 --alsologtostderr: (20.598143516s)
multinode_test.go:116: (dbg) Run:  out/minikube-linux-amd64 -p multinode-517978 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (21.19s)

TestMultiNode/serial/ProfileList (0.26s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.26s)

TestMultiNode/serial/CopyFile (8.76s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-linux-amd64 -p multinode-517978 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-517978 cp testdata/cp-test.txt multinode-517978:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-517978 ssh -n multinode-517978 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-517978 cp multinode-517978:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4220387715/001/cp-test_multinode-517978.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-517978 ssh -n multinode-517978 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-517978 cp multinode-517978:/home/docker/cp-test.txt multinode-517978-m02:/home/docker/cp-test_multinode-517978_multinode-517978-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-517978 ssh -n multinode-517978 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-517978 ssh -n multinode-517978-m02 "sudo cat /home/docker/cp-test_multinode-517978_multinode-517978-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-517978 cp multinode-517978:/home/docker/cp-test.txt multinode-517978-m03:/home/docker/cp-test_multinode-517978_multinode-517978-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-517978 ssh -n multinode-517978 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-517978 ssh -n multinode-517978-m03 "sudo cat /home/docker/cp-test_multinode-517978_multinode-517978-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-517978 cp testdata/cp-test.txt multinode-517978-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-517978 ssh -n multinode-517978-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-517978 cp multinode-517978-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4220387715/001/cp-test_multinode-517978-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-517978 ssh -n multinode-517978-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-517978 cp multinode-517978-m02:/home/docker/cp-test.txt multinode-517978:/home/docker/cp-test_multinode-517978-m02_multinode-517978.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-517978 ssh -n multinode-517978-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-517978 ssh -n multinode-517978 "sudo cat /home/docker/cp-test_multinode-517978-m02_multinode-517978.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-517978 cp multinode-517978-m02:/home/docker/cp-test.txt multinode-517978-m03:/home/docker/cp-test_multinode-517978-m02_multinode-517978-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-517978 ssh -n multinode-517978-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-517978 ssh -n multinode-517978-m03 "sudo cat /home/docker/cp-test_multinode-517978-m02_multinode-517978-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-517978 cp testdata/cp-test.txt multinode-517978-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-517978 ssh -n multinode-517978-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-517978 cp multinode-517978-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4220387715/001/cp-test_multinode-517978-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-517978 ssh -n multinode-517978-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-517978 cp multinode-517978-m03:/home/docker/cp-test.txt multinode-517978:/home/docker/cp-test_multinode-517978-m03_multinode-517978.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-517978 ssh -n multinode-517978-m03 "sudo cat /home/docker/cp-test.txt"
E0911 11:29:57.821316  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/ingress-addon-legacy-452365/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-517978 ssh -n multinode-517978 "sudo cat /home/docker/cp-test_multinode-517978-m03_multinode-517978.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-517978 cp multinode-517978-m03:/home/docker/cp-test.txt multinode-517978-m02:/home/docker/cp-test_multinode-517978-m03_multinode-517978-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-517978 ssh -n multinode-517978-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-517978 ssh -n multinode-517978-m02 "sudo cat /home/docker/cp-test_multinode-517978-m03_multinode-517978-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (8.76s)

TestMultiNode/serial/StopNode (2.1s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-linux-amd64 -p multinode-517978 node stop m03
multinode_test.go:210: (dbg) Done: out/minikube-linux-amd64 -p multinode-517978 node stop m03: (1.188806149s)
multinode_test.go:216: (dbg) Run:  out/minikube-linux-amd64 -p multinode-517978 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-517978 status: exit status 7 (454.744812ms)

-- stdout --
	multinode-517978
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-517978-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-517978-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:223: (dbg) Run:  out/minikube-linux-amd64 -p multinode-517978 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-517978 status --alsologtostderr: exit status 7 (451.74941ms)

-- stdout --
	multinode-517978
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-517978-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-517978-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0911 11:30:00.741423  240676 out.go:296] Setting OutFile to fd 1 ...
	I0911 11:30:00.741576  240676 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 11:30:00.741595  240676 out.go:309] Setting ErrFile to fd 2...
	I0911 11:30:00.741605  240676 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 11:30:00.741855  240676 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17223-136166/.minikube/bin
	I0911 11:30:00.742066  240676 out.go:303] Setting JSON to false
	I0911 11:30:00.742123  240676 mustload.go:65] Loading cluster: multinode-517978
	I0911 11:30:00.742238  240676 notify.go:220] Checking for updates...
	I0911 11:30:00.742689  240676 config.go:182] Loaded profile config "multinode-517978": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0911 11:30:00.742708  240676 status.go:255] checking status of multinode-517978 ...
	I0911 11:30:00.743155  240676 cli_runner.go:164] Run: docker container inspect multinode-517978 --format={{.State.Status}}
	I0911 11:30:00.760392  240676 status.go:330] multinode-517978 host status = "Running" (err=<nil>)
	I0911 11:30:00.760430  240676 host.go:66] Checking if "multinode-517978" exists ...
	I0911 11:30:00.760663  240676 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-517978
	I0911 11:30:00.782226  240676 host.go:66] Checking if "multinode-517978" exists ...
	I0911 11:30:00.782507  240676 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0911 11:30:00.782544  240676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-517978
	I0911 11:30:00.801953  240676 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32967 SSHKeyPath:/home/jenkins/minikube-integration/17223-136166/.minikube/machines/multinode-517978/id_rsa Username:docker}
	I0911 11:30:00.891409  240676 ssh_runner.go:195] Run: systemctl --version
	I0911 11:30:00.895378  240676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0911 11:30:00.905444  240676 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0911 11:30:00.954694  240676 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:56 SystemTime:2023-09-11 11:30:00.946632772 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1041-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0911 11:30:00.955206  240676 kubeconfig.go:92] found "multinode-517978" server: "https://192.168.58.2:8443"
	I0911 11:30:00.955226  240676 api_server.go:166] Checking apiserver status ...
	I0911 11:30:00.955255  240676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 11:30:00.965868  240676 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1443/cgroup
	I0911 11:30:00.974223  240676 api_server.go:182] apiserver freezer: "7:freezer:/docker/0e320f19ce0ce722709872eed592d3b3d1159c017b64f129d929e2069d1e41e1/crio/crio-1af18db79efe940ada9f3a8f36848b61b1b23ecf891c17ded868ff0de28a4905"
	I0911 11:30:00.974292  240676 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/0e320f19ce0ce722709872eed592d3b3d1159c017b64f129d929e2069d1e41e1/crio/crio-1af18db79efe940ada9f3a8f36848b61b1b23ecf891c17ded868ff0de28a4905/freezer.state
	I0911 11:30:00.981640  240676 api_server.go:204] freezer state: "THAWED"
	I0911 11:30:00.981671  240676 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0911 11:30:00.986164  240676 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0911 11:30:00.986188  240676 status.go:421] multinode-517978 apiserver status = Running (err=<nil>)
	I0911 11:30:00.986198  240676 status.go:257] multinode-517978 status: &{Name:multinode-517978 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0911 11:30:00.986214  240676 status.go:255] checking status of multinode-517978-m02 ...
	I0911 11:30:00.986448  240676 cli_runner.go:164] Run: docker container inspect multinode-517978-m02 --format={{.State.Status}}
	I0911 11:30:01.003147  240676 status.go:330] multinode-517978-m02 host status = "Running" (err=<nil>)
	I0911 11:30:01.003170  240676 host.go:66] Checking if "multinode-517978-m02" exists ...
	I0911 11:30:01.003480  240676 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-517978-m02
	I0911 11:30:01.020259  240676 host.go:66] Checking if "multinode-517978-m02" exists ...
	I0911 11:30:01.020504  240676 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0911 11:30:01.020541  240676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-517978-m02
	I0911 11:30:01.037751  240676 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32972 SSHKeyPath:/home/jenkins/minikube-integration/17223-136166/.minikube/machines/multinode-517978-m02/id_rsa Username:docker}
	I0911 11:30:01.126893  240676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0911 11:30:01.136818  240676 status.go:257] multinode-517978-m02 status: &{Name:multinode-517978-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0911 11:30:01.136847  240676 status.go:255] checking status of multinode-517978-m03 ...
	I0911 11:30:01.137109  240676 cli_runner.go:164] Run: docker container inspect multinode-517978-m03 --format={{.State.Status}}
	I0911 11:30:01.153264  240676 status.go:330] multinode-517978-m03 host status = "Stopped" (err=<nil>)
	I0911 11:30:01.153286  240676 status.go:343] host is not running, skipping remaining checks
	I0911 11:30:01.153294  240676 status.go:257] multinode-517978-m03 status: &{Name:multinode-517978-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.10s)

TestMultiNode/serial/StartAfterStop (10.89s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:244: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-517978 node start m03 --alsologtostderr
multinode_test.go:254: (dbg) Done: out/minikube-linux-amd64 -p multinode-517978 node start m03 --alsologtostderr: (10.228484857s)
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-517978 status
multinode_test.go:275: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (10.89s)

TestMultiNode/serial/RestartKeepsNodes (111.6s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-517978
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-517978
multinode_test.go:290: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-517978: (24.792521622s)
multinode_test.go:295: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-517978 --wait=true -v=8 --alsologtostderr
E0911 11:31:32.891077  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/addons-387581/client.crt: no such file or directory
multinode_test.go:295: (dbg) Done: out/minikube-linux-amd64 start -p multinode-517978 --wait=true -v=8 --alsologtostderr: (1m26.724188895s)
multinode_test.go:300: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-517978
--- PASS: TestMultiNode/serial/RestartKeepsNodes (111.60s)

TestMultiNode/serial/DeleteNode (4.64s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-linux-amd64 -p multinode-517978 node delete m03
multinode_test.go:394: (dbg) Done: out/minikube-linux-amd64 -p multinode-517978 node delete m03: (4.061942448s)
multinode_test.go:400: (dbg) Run:  out/minikube-linux-amd64 -p multinode-517978 status --alsologtostderr
multinode_test.go:414: (dbg) Run:  docker volume ls
multinode_test.go:424: (dbg) Run:  kubectl get nodes
multinode_test.go:432: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (4.64s)

TestMultiNode/serial/StopMultiNode (23.78s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p multinode-517978 stop
E0911 11:32:22.263880  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/functional-224127/client.crt: no such file or directory
multinode_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p multinode-517978 stop: (23.624158512s)
multinode_test.go:320: (dbg) Run:  out/minikube-linux-amd64 -p multinode-517978 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-517978 status: exit status 7 (79.138625ms)

-- stdout --
	multinode-517978
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-517978-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:327: (dbg) Run:  out/minikube-linux-amd64 -p multinode-517978 status --alsologtostderr
multinode_test.go:327: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-517978 status --alsologtostderr: exit status 7 (78.89839ms)

-- stdout --
	multinode-517978
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-517978-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0911 11:32:32.034065  250858 out.go:296] Setting OutFile to fd 1 ...
	I0911 11:32:32.034247  250858 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 11:32:32.034258  250858 out.go:309] Setting ErrFile to fd 2...
	I0911 11:32:32.034265  250858 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 11:32:32.034471  250858 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17223-136166/.minikube/bin
	I0911 11:32:32.034655  250858 out.go:303] Setting JSON to false
	I0911 11:32:32.034697  250858 mustload.go:65] Loading cluster: multinode-517978
	I0911 11:32:32.034803  250858 notify.go:220] Checking for updates...
	I0911 11:32:32.035120  250858 config.go:182] Loaded profile config "multinode-517978": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0911 11:32:32.035138  250858 status.go:255] checking status of multinode-517978 ...
	I0911 11:32:32.035557  250858 cli_runner.go:164] Run: docker container inspect multinode-517978 --format={{.State.Status}}
	I0911 11:32:32.053105  250858 status.go:330] multinode-517978 host status = "Stopped" (err=<nil>)
	I0911 11:32:32.053140  250858 status.go:343] host is not running, skipping remaining checks
	I0911 11:32:32.053148  250858 status.go:257] multinode-517978 status: &{Name:multinode-517978 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0911 11:32:32.053202  250858 status.go:255] checking status of multinode-517978-m02 ...
	I0911 11:32:32.053551  250858 cli_runner.go:164] Run: docker container inspect multinode-517978-m02 --format={{.State.Status}}
	I0911 11:32:32.069972  250858 status.go:330] multinode-517978-m02 host status = "Stopped" (err=<nil>)
	I0911 11:32:32.069995  250858 status.go:343] host is not running, skipping remaining checks
	I0911 11:32:32.070001  250858 status.go:257] multinode-517978-m02 status: &{Name:multinode-517978-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.78s)

TestMultiNode/serial/RestartMultiNode (79.22s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:344: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:354: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-517978 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E0911 11:32:55.937133  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/addons-387581/client.crt: no such file or directory
multinode_test.go:354: (dbg) Done: out/minikube-linux-amd64 start -p multinode-517978 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m18.62641497s)
multinode_test.go:360: (dbg) Run:  out/minikube-linux-amd64 -p multinode-517978 status --alsologtostderr
multinode_test.go:374: (dbg) Run:  kubectl get nodes
multinode_test.go:382: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (79.22s)

TestMultiNode/serial/ValidateNameConflict (26.99s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-517978
multinode_test.go:452: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-517978-m02 --driver=docker  --container-runtime=crio
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-517978-m02 --driver=docker  --container-runtime=crio: exit status 14 (62.149461ms)

-- stdout --
	* [multinode-517978-m02] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17223
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17223-136166/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17223-136166/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-517978-m02' is duplicated with machine name 'multinode-517978-m02' in profile 'multinode-517978'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-517978-m03 --driver=docker  --container-runtime=crio
multinode_test.go:460: (dbg) Done: out/minikube-linux-amd64 start -p multinode-517978-m03 --driver=docker  --container-runtime=crio: (24.76722628s)
multinode_test.go:467: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-517978
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-517978: exit status 80 (266.646412ms)

-- stdout --
	* Adding node m03 to cluster multinode-517978
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-517978-m03 already exists in multinode-517978-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-517978-m03
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-517978-m03: (1.852202118s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (26.99s)
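
(Aside: a sketch of the name-conflict rule this test exercises; profile names are illustrative. A new profile may not reuse the machine name of a node belonging to an existing multi-node profile.)

	# multinode-517978 already owns a node machine named multinode-517978-m02,
	# so starting a profile with that name fails fast with MK_USAGE (exit 14):
	out/minikube-linux-amd64 start -p multinode-517978-m02 --driver=docker --container-runtime=crio
	# A non-colliding name works:
	out/minikube-linux-amd64 start -p multinode-517978-m03 --driver=docker --container-runtime=crio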

TestPreload (150.17s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-139015 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
E0911 11:34:30.135864  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/ingress-addon-legacy-452365/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-139015 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m13.709902869s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-139015 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-139015 image pull gcr.io/k8s-minikube/busybox: (1.627535625s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-139015
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-139015: (5.653085771s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-139015 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E0911 11:36:32.891406  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/addons-387581/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-139015 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (1m6.766151422s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-139015 image list
helpers_test.go:175: Cleaning up "test-preload-139015" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-139015
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-139015: (2.21010838s)
--- PASS: TestPreload (150.17s)
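
(Aside: the cache check this test performs, condensed. With --preload=false, images are pulled individually; after a stop/start cycle the previously pulled image must still be present in the runtime's storage.)

	out/minikube-linux-amd64 start -p test-preload-139015 --preload=false --kubernetes-version=v1.24.4 --driver=docker --container-runtime=crio
	out/minikube-linux-amd64 -p test-preload-139015 image pull gcr.io/k8s-minikube/busybox
	out/minikube-linux-amd64 stop -p test-preload-139015
	out/minikube-linux-amd64 start -p test-preload-139015 --driver=docker --container-runtime=crio
	out/minikube-linux-amd64 -p test-preload-139015 image list   # busybox should still be listed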

TestScheduledStopUnix (96.41s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-887818 --memory=2048 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-887818 --memory=2048 --driver=docker  --container-runtime=crio: (21.082399017s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-887818 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-887818 -n scheduled-stop-887818
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-887818 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-887818 --cancel-scheduled
E0911 11:37:22.263329  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/functional-224127/client.crt: no such file or directory
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-887818 -n scheduled-stop-887818
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-887818
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-887818 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-887818
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-887818: exit status 7 (66.018129ms)

-- stdout --
	scheduled-stop-887818
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-887818 -n scheduled-stop-887818
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-887818 -n scheduled-stop-887818: exit status 7 (58.536799ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-887818" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-887818
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-887818: (4.077170644s)
--- PASS: TestScheduledStopUnix (96.41s)
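
(Aside: the scheduled-stop workflow exercised above, as a standalone sketch; the profile name is illustrative.)

	out/minikube-linux-amd64 stop -p scheduled-stop-887818 --schedule 5m        # arm a stop 5 minutes out
	out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-887818   # shows the pending deadline
	out/minikube-linux-amd64 stop -p scheduled-stop-887818 --cancel-scheduled   # disarm it
	out/minikube-linux-amd64 stop -p scheduled-stop-887818 --schedule 15s       # re-arm; ~15s later `status` exits 7 with host: Stopped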

TestInsufficientStorage (12.99s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-580321 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-580321 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (10.652901826s)

-- stdout --
	{"specversion":"1.0","id":"badc8eb4-a614-4d70-b509-db382b954065","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-580321] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"3c2955a9-5f23-4f86-9eca-c65974bfa368","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17223"}}
	{"specversion":"1.0","id":"f4553155-2a29-4fa3-933c-8bf9775abdcb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"f2e57130-0bb3-4013-a823-6e8c9da68ae7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17223-136166/kubeconfig"}}
	{"specversion":"1.0","id":"5a7b83ce-eb45-429f-a3fc-2415b33e3724","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17223-136166/.minikube"}}
	{"specversion":"1.0","id":"ed93cb50-af6e-4cee-aa6d-dc71f3b2c675","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"09e299f1-dac2-4a5f-a5de-771b453a6abb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"1989fa1e-5e61-4bd7-94ac-ff11d62aa4ba","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"c564515a-b03c-4ad6-9292-1a09861d31fc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"d05f9ab7-b198-4962-8a38-924cab7dd772","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"3feb30df-9f42-44cd-a5ce-9396fda6333b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"06e66da2-4e19-4812-a6bc-3ea50ccd5cb3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-580321 in cluster insufficient-storage-580321","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"d672b1a3-30fe-4bda-b4d2-517404f7dba7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"5cd90aa3-0d4d-4c65-ac2a-a928d225add9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"326f3cd2-c330-4e37-bb6e-4f124664991e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-580321 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-580321 --output=json --layout=cluster: exit status 7 (261.171217ms)

-- stdout --
	{"Name":"insufficient-storage-580321","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.31.2","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-580321","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0911 11:38:41.344877  272333 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-580321" does not appear in /home/jenkins/minikube-integration/17223-136166/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-580321 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-580321 --output=json --layout=cluster: exit status 7 (262.166139ms)

-- stdout --
	{"Name":"insufficient-storage-580321","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.31.2","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-580321","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0911 11:38:41.606638  272432 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-580321" does not appear in /home/jenkins/minikube-integration/17223-136166/kubeconfig
	E0911 11:38:41.616875  272432 status.go:559] unable to read event log: stat: stat /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/insufficient-storage-580321/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-580321" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-580321
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-580321: (1.817195604s)
--- PASS: TestInsufficientStorage (12.99s)
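
(Aside: with --output=json the status is machine-readable; a sketch of pulling out the fields the test asserts on, assuming jq is available on the host.)

	out/minikube-linux-amd64 status -p insufficient-storage-580321 --output=json --layout=cluster \
	  | jq -r '.StatusName, .Nodes[0].Components.kubelet.StatusName'
	# Prints "InsufficientStorage" and "Stopped"; the status command itself exits 7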

TestKubernetesUpgrade (357.54s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:235: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-872265 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0911 11:40:53.181659  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/ingress-addon-legacy-452365/client.crt: no such file or directory
version_upgrade_test.go:235: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-872265 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (50.654191876s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-872265
version_upgrade_test.go:240: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-872265: (4.714414255s)
version_upgrade_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-872265 status --format={{.Host}}
version_upgrade_test.go:245: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-872265 status --format={{.Host}}: exit status 7 (84.384138ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:247: status error: exit status 7 (may be ok)
version_upgrade_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-872265 --memory=2200 --kubernetes-version=v1.28.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-872265 --memory=2200 --kubernetes-version=v1.28.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m33.444738777s)
version_upgrade_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-872265 version --output=json
version_upgrade_test.go:280: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:282: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-872265 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:282: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-872265 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=crio: exit status 106 (72.072793ms)

-- stdout --
	* [kubernetes-upgrade-872265] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17223
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17223-136166/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17223-136166/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.28.1 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-872265
	    minikube start -p kubernetes-upgrade-872265 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-8722652 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.28.1, by running:
	    
	    minikube start -p kubernetes-upgrade-872265 --kubernetes-version=v1.28.1
	    

** /stderr **
version_upgrade_test.go:286: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:288: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-872265 --memory=2200 --kubernetes-version=v1.28.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:288: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-872265 --memory=2200 --kubernetes-version=v1.28.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (26.304777123s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-872265" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-872265
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-872265: (2.204746861s)
--- PASS: TestKubernetesUpgrade (357.54s)
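
(Aside: the upgrade path the test validates, condensed. Upgrades are performed in place on a stopped cluster; downgrades are refused with K8S_DOWNGRADE_UNSUPPORTED, and the suggested remediation is to recreate the profile.)

	out/minikube-linux-amd64 start -p kubernetes-upgrade-872265 --kubernetes-version=v1.16.0 --driver=docker --container-runtime=crio
	out/minikube-linux-amd64 stop -p kubernetes-upgrade-872265
	out/minikube-linux-amd64 start -p kubernetes-upgrade-872265 --kubernetes-version=v1.28.1 --driver=docker --container-runtime=crio   # in-place upgrade
	out/minikube-linux-amd64 start -p kubernetes-upgrade-872265 --kubernetes-version=v1.16.0 --driver=docker --container-runtime=crio   # refused, exit 106
	# To actually downgrade: minikube delete -p kubernetes-upgrade-872265, then start again at the older version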

TestMissingContainerUpgrade (144.44s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:322: (dbg) Run:  /tmp/minikube-v1.9.0.2425326232.exe start -p missing-upgrade-782427 --memory=2200 --driver=docker  --container-runtime=crio
version_upgrade_test.go:322: (dbg) Done: /tmp/minikube-v1.9.0.2425326232.exe start -p missing-upgrade-782427 --memory=2200 --driver=docker  --container-runtime=crio: (1m17.510931956s)
version_upgrade_test.go:331: (dbg) Run:  docker stop missing-upgrade-782427
version_upgrade_test.go:331: (dbg) Done: docker stop missing-upgrade-782427: (3.136560384s)
version_upgrade_test.go:336: (dbg) Run:  docker rm missing-upgrade-782427
version_upgrade_test.go:342: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-782427 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:342: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-782427 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (59.683944927s)
helpers_test.go:175: Cleaning up "missing-upgrade-782427" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-782427
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-782427: (3.655818465s)
--- PASS: TestMissingContainerUpgrade (144.44s)
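
(Aside: the recovery scenario above in brief. A cluster is created with an old minikube binary, its container is removed out from under it, and the current binary must recreate it on start. The old-binary path is a temp file from this run; treat it as illustrative.)

	/tmp/minikube-v1.9.0.2425326232.exe start -p missing-upgrade-782427 --driver=docker --container-runtime=crio
	docker stop missing-upgrade-782427 && docker rm missing-upgrade-782427
	out/minikube-linux-amd64 start -p missing-upgrade-782427 --driver=docker --container-runtime=crio   # rebuilds the missing container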

TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-341786 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-341786 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (82.663738ms)

-- stdout --
	* [NoKubernetes-341786] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17223
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17223-136166/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17223-136166/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
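
(Aside: --no-kubernetes and --kubernetes-version are mutually exclusive, which is all this test checks. If a version is pinned in the global config, unset it first, as the error message suggests.)

	minikube config unset kubernetes-version
	out/minikube-linux-amd64 start -p NoKubernetes-341786 --no-kubernetes --driver=docker --container-runtime=crio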

TestNoKubernetes/serial/StartWithK8s (35.45s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-341786 --driver=docker  --container-runtime=crio
E0911 11:38:45.309536  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/functional-224127/client.crt: no such file or directory
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-341786 --driver=docker  --container-runtime=crio: (35.101670476s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-341786 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (35.45s)

TestNetworkPlugins/group/false (8.93s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-917885 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-917885 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (198.252688ms)

-- stdout --
	* [false-917885] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17223
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17223-136166/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17223-136166/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I0911 11:38:47.172532  274545 out.go:296] Setting OutFile to fd 1 ...
	I0911 11:38:47.172659  274545 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 11:38:47.172672  274545 out.go:309] Setting ErrFile to fd 2...
	I0911 11:38:47.172679  274545 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 11:38:47.173001  274545 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17223-136166/.minikube/bin
	I0911 11:38:47.173666  274545 out.go:303] Setting JSON to false
	I0911 11:38:47.175540  274545 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":4875,"bootTime":1694427452,"procs":650,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0911 11:38:47.175636  274545 start.go:138] virtualization: kvm guest
	I0911 11:38:47.178268  274545 out.go:177] * [false-917885] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0911 11:38:47.180617  274545 out.go:177]   - MINIKUBE_LOCATION=17223
	I0911 11:38:47.180584  274545 notify.go:220] Checking for updates...
	I0911 11:38:47.182965  274545 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0911 11:38:47.184683  274545 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17223-136166/kubeconfig
	I0911 11:38:47.186110  274545 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17223-136166/.minikube
	I0911 11:38:47.187722  274545 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0911 11:38:47.189275  274545 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0911 11:38:47.193673  274545 config.go:182] Loaded profile config "NoKubernetes-341786": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0911 11:38:47.193863  274545 config.go:182] Loaded profile config "force-systemd-env-345094": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0911 11:38:47.194038  274545 config.go:182] Loaded profile config "offline-crio-341798": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0911 11:38:47.194203  274545 driver.go:373] Setting default libvirt URI to qemu:///system
	I0911 11:38:47.225395  274545 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I0911 11:38:47.225496  274545 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0911 11:38:47.311340  274545 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:54 OomKillDisable:true NGoroutines:75 SystemTime:2023-09-11 11:38:47.301011697 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1041-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0911 11:38:47.311471  274545 docker.go:294] overlay module found
	I0911 11:38:47.314498  274545 out.go:177] * Using the docker driver based on user configuration
	I0911 11:38:47.316219  274545 start.go:298] selected driver: docker
	I0911 11:38:47.316235  274545 start.go:902] validating driver "docker" against <nil>
	I0911 11:38:47.316251  274545 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0911 11:38:47.318957  274545 out.go:177] 
	W0911 11:38:47.320670  274545 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0911 11:38:47.322283  274545 out.go:177] 

** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-917885 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-917885

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-917885

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-917885

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-917885

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-917885

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-917885

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-917885

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-917885

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-917885

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-917885

>>> host: /etc/nsswitch.conf:
* Profile "false-917885" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-917885"

>>> host: /etc/hosts:
* Profile "false-917885" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-917885"

>>> host: /etc/resolv.conf:
* Profile "false-917885" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-917885"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-917885

>>> host: crictl pods:
* Profile "false-917885" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-917885"

>>> host: crictl containers:
* Profile "false-917885" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-917885"

>>> k8s: describe netcat deployment:
error: context "false-917885" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-917885" does not exist

>>> k8s: netcat logs:
error: context "false-917885" does not exist

>>> k8s: describe coredns deployment:
error: context "false-917885" does not exist

>>> k8s: describe coredns pods:
error: context "false-917885" does not exist

>>> k8s: coredns logs:
error: context "false-917885" does not exist

>>> k8s: describe api server pod(s):
error: context "false-917885" does not exist

>>> k8s: api server logs:
error: context "false-917885" does not exist

>>> host: /etc/cni:
* Profile "false-917885" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-917885"

>>> host: ip a s:
* Profile "false-917885" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-917885"

>>> host: ip r s:
* Profile "false-917885" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-917885"

>>> host: iptables-save:
* Profile "false-917885" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-917885"

>>> host: iptables table nat:
* Profile "false-917885" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-917885"

>>> k8s: describe kube-proxy daemon set:
error: context "false-917885" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-917885" does not exist

>>> k8s: kube-proxy logs:
error: context "false-917885" does not exist

>>> host: kubelet daemon status:
* Profile "false-917885" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-917885"

>>> host: kubelet daemon config:
* Profile "false-917885" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-917885"

>>> k8s: kubelet logs:
* Profile "false-917885" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-917885"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-917885" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-917885"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-917885" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-917885"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-917885

>>> host: docker daemon status:
* Profile "false-917885" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-917885"

>>> host: docker daemon config:
* Profile "false-917885" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-917885"

>>> host: /etc/docker/daemon.json:
* Profile "false-917885" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-917885"

>>> host: docker system info:
* Profile "false-917885" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-917885"

>>> host: cri-docker daemon status:
* Profile "false-917885" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-917885"

>>> host: cri-docker daemon config:
* Profile "false-917885" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-917885"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-917885" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-917885"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-917885" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-917885"

>>> host: cri-dockerd version:
* Profile "false-917885" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-917885"

>>> host: containerd daemon status:
* Profile "false-917885" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-917885"

>>> host: containerd daemon config:
* Profile "false-917885" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-917885"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-917885" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-917885"

>>> host: /etc/containerd/config.toml:
* Profile "false-917885" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-917885"

>>> host: containerd config dump:
* Profile "false-917885" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-917885"

>>> host: crio daemon status:
* Profile "false-917885" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-917885"

>>> host: crio daemon config:
* Profile "false-917885" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-917885"

>>> host: /etc/crio:
* Profile "false-917885" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-917885"

>>> host: crio config:
* Profile "false-917885" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-917885"
----------------------- debugLogs end: false-917885 [took: 8.451976831s] --------------------------------
helpers_test.go:175: Cleaning up "false-917885" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-917885
--- PASS: TestNetworkPlugins/group/false (8.93s)
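
(Aside: the fast-fail being exercised here: the crio runtime requires a CNI, so --cni=false is rejected during driver validation with MK_USAGE before any container is created.)

	out/minikube-linux-amd64 start -p false-917885 --cni=false --driver=docker --container-runtime=crio
	# X Exiting due to MK_USAGE: The "crio" container runtime requires CNI (exit 14)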

TestNoKubernetes/serial/StartWithStopK8s (18.76s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-341786 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-341786 --no-kubernetes --driver=docker  --container-runtime=crio: (16.556794907s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-341786 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-341786 status -o json: exit status 2 (273.892915ms)

-- stdout --
	{"Name":"NoKubernetes-341786","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-341786
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-341786: (1.924178106s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (18.76s)

TestNoKubernetes/serial/Start (4.68s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-341786 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-341786 --no-kubernetes --driver=docker  --container-runtime=crio: (4.683111724s)
--- PASS: TestNoKubernetes/serial/Start (4.68s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-341786 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-341786 "sudo systemctl is-active --quiet service kubelet": exit status 1 (271.648427ms)

** stderr **
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)
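Note: "systemctl is-active --quiet" exits 0 only when the unit is active, so the non-zero exit here is the expected "kubelet is not running" signal; the "ssh: Process exited with status 3" line is the guest's exit code surfaced by the SSH wrapper. A hedged sketch of consuming that probe from Go (profile name taken from the log; error handling simplified):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Same probe as the test above; any non-zero exit is read as "not active".
	err := exec.Command("minikube", "ssh", "-p", "NoKubernetes-341786",
		"sudo systemctl is-active --quiet service kubelet").Run()
	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("kubelet active")
	case errors.As(err, &exitErr):
		fmt.Println("kubelet not active, exit code:", exitErr.ExitCode())
	default:
		fmt.Println("could not run minikube:", err)
	}
}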

TestNoKubernetes/serial/ProfileList (1.7s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.70s)

TestNoKubernetes/serial/Stop (1.24s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-341786
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-341786: (1.241997347s)
--- PASS: TestNoKubernetes/serial/Stop (1.24s)

TestNoKubernetes/serial/StartNoArgs (7.96s)
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-341786 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-341786 --driver=docker  --container-runtime=crio: (7.961366355s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.96s)

TestStoppedBinaryUpgrade/Setup (0.48s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.48s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.26s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-341786 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-341786 "sudo systemctl is-active --quiet service kubelet": exit status 1 (258.668183ms)

** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.26s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.52s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:219: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-822606
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.52s)

TestPause/serial/Start (44.08s)
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-844693 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
E0911 11:42:22.263383  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/functional-224127/client.crt: no such file or directory
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-844693 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (44.076806416s)
--- PASS: TestPause/serial/Start (44.08s)

TestNetworkPlugins/group/auto/Start (70.56s)
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-917885 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-917885 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m10.559264521s)
--- PASS: TestNetworkPlugins/group/auto/Start (70.56s)

TestNetworkPlugins/group/kindnet/Start (72.48s)
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-917885 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-917885 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m12.480280039s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (72.48s)

TestNetworkPlugins/group/calico/Start (63.31s)
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-917885 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-917885 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m3.30993708s)
--- PASS: TestNetworkPlugins/group/calico/Start (63.31s)

TestNetworkPlugins/group/auto/KubeletFlags (0.31s)
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-917885 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.31s)

TestNetworkPlugins/group/auto/NetCatPod (9.35s)
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-917885 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-k8vtg" [38c1e634-c4c3-460f-9416-de764d8e5ecd] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-k8vtg" [38c1e634-c4c3-460f-9416-de764d8e5ecd] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.010472382s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.35s)
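Note: every NetCatPod step follows the same pattern: "kubectl replace --force" (delete-and-recreate, so the deployment is always fresh) and then a wait for a pod matching the label selector. A rough client-go sketch of such a wait, assuming the current kubeconfig context points at the cluster (the real helper also checks readiness conditions, not just the Running phase):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Poll until a pod labeled app=netcat reports Running, or give up.
	deadline := time.Now().Add(15 * time.Minute)
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods("default").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "app=netcat"})
		if err == nil {
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					fmt.Println("running:", p.Name)
					return
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for app=netcat")
}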

TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-99bhd" [723b2055-423b-41e7-b2b6-82865a0afb99] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.018057775s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.26s)
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-917885 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.26s)

TestNetworkPlugins/group/kindnet/NetCatPod (9.28s)
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-917885 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-js2x9" [d68bbb57-7089-4b6a-b264-e5067cd7c3ee] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-js2x9" [d68bbb57-7089-4b6a-b264-e5067cd7c3ee] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.009519944s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.28s)

TestNetworkPlugins/group/auto/DNS (0.19s)
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-917885 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.19s)

TestNetworkPlugins/group/auto/Localhost (0.16s)
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-917885 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.16s)

TestNetworkPlugins/group/auto/HairPin (0.14s)
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-917885 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.14s)
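Note: the Localhost and HairPin probes both use "nc -z" (connect-only, no payload) with "-w 5" as the timeout; Localhost dials 127.0.0.1, while HairPin dials the pod's own service name ("netcat") to verify hairpin NAT. A rough Go equivalent of the connect check (hosts and port taken from the commands above):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Equivalent of "nc -w 5 -z <host> 8080": success is a TCP connect
	// completing within 5 seconds; nothing is sent on the connection.
	for _, host := range []string{"localhost", "netcat"} {
		conn, err := net.DialTimeout("tcp", net.JoinHostPort(host, "8080"), 5*time.Second)
		if err != nil {
			fmt.Println(host, "unreachable:", err)
			continue
		}
		conn.Close()
		fmt.Println(host, "reachable")
	}
}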

TestNetworkPlugins/group/kindnet/DNS (0.18s)
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-917885 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.18s)

TestNetworkPlugins/group/kindnet/Localhost (0.2s)
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-917885 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.20s)

TestNetworkPlugins/group/kindnet/HairPin (0.21s)
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-917885 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.21s)

TestNetworkPlugins/group/custom-flannel/Start (63.55s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-917885 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-917885 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m3.554600163s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (63.55s)

TestNetworkPlugins/group/enable-default-cni/Start (77.17s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-917885 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-917885 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m17.168251379s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (77.17s)

TestNetworkPlugins/group/calico/ControllerPod (5.03s)
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-g2rsb" [1078166f-3d4e-46b3-ad85-11726d8abd7c] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.030905602s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.03s)

TestNetworkPlugins/group/calico/KubeletFlags (0.26s)
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-917885 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.26s)

TestNetworkPlugins/group/calico/NetCatPod (10.36s)
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-917885 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-zngdk" [75cd9810-6359-4534-8def-f1c659554fa2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-zngdk" [75cd9810-6359-4534-8def-f1c659554fa2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.009573431s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.36s)

TestNetworkPlugins/group/calico/DNS (0.19s)
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-917885 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.19s)

TestNetworkPlugins/group/calico/Localhost (0.18s)
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-917885 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.18s)

TestNetworkPlugins/group/calico/HairPin (0.17s)
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-917885 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.17s)

TestNetworkPlugins/group/flannel/Start (58.63s)
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-917885 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-917885 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (58.631807823s)
--- PASS: TestNetworkPlugins/group/flannel/Start (58.63s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.26s)
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-917885 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.26s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (9.24s)
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-917885 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-62h88" [e84deec8-0707-43a1-88ad-818e2f61b626] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-62h88" [e84deec8-0707-43a1-88ad-818e2f61b626] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.010201891s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.24s)

TestNetworkPlugins/group/custom-flannel/DNS (0.2s)
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-917885 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.20s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-917885 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.18s)
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-917885 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.18s)

TestNetworkPlugins/group/bridge/Start (38.02s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-917885 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-917885 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (38.024710989s)
--- PASS: TestNetworkPlugins/group/bridge/Start (38.02s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.31s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-917885 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.31s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.32s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-917885 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-7xbhh" [4365c1c7-124a-4d62-bf87-6686bb5d96ea] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-7xbhh" [4365c1c7-124a-4d62-bf87-6686bb5d96ea] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.01237849s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.32s)

TestStartStop/group/old-k8s-version/serial/FirstStart (126.25s)
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-416610 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-416610 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0: (2m6.250630414s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (126.25s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-917885 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-917885 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-917885 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

TestNetworkPlugins/group/flannel/ControllerPod (5.03s)
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-fp8dt" [88f18833-a1e6-4a7a-8b72-1c256d9bde1f] Running
E0911 11:46:32.892008  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/addons-387581/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.019763538s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.03s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.34s)
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-917885 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.34s)

TestNetworkPlugins/group/flannel/NetCatPod (13s)
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-917885 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-lgwlk" [473a0229-4e04-4bf4-93ee-73015a501b45] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-lgwlk" [473a0229-4e04-4bf4-93ee-73015a501b45] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 12.008589134s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (13.00s)

TestStartStop/group/no-preload/serial/FirstStart (68.28s)
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-986245 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-986245 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.1: (1m8.283027931s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (68.28s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.4s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-917885 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.40s)

TestNetworkPlugins/group/bridge/NetCatPod (9.42s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-917885 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-pkkvw" [c8ba11dc-3e15-4aac-b058-848f87d92eaa] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-pkkvw" [c8ba11dc-3e15-4aac-b058-848f87d92eaa] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.010790583s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.42s)

TestNetworkPlugins/group/flannel/DNS (0.2s)
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-917885 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.20s)

TestNetworkPlugins/group/flannel/Localhost (0.15s)
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-917885 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.15s)

TestNetworkPlugins/group/flannel/HairPin (0.16s)
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-917885 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.16s)

TestNetworkPlugins/group/bridge/DNS (33.79s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-917885 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Non-zero exit: kubectl --context bridge-917885 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.162034423s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	
-- /stdout --
** stderr ** 
	command terminated with exit code 1
** /stderr **
net_test.go:175: (dbg) Run:  kubectl --context bridge-917885 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Non-zero exit: kubectl --context bridge-917885 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.180307336s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	
-- /stdout --
** stderr ** 
	command terminated with exit code 1
** /stderr **
net_test.go:175: (dbg) Run:  kubectl --context bridge-917885 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (33.79s)
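Note: the two timed-out lookups above are followed by a third attempt with no error, and the step still passes: the DNS check is retried until a deadline rather than failing on the first miss, which absorbs CoreDNS warm-up after a CNI comes up. A generic sketch of that retry shape (an illustration, not the helper net_test.go actually uses):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// retryProbe re-runs probe until it succeeds or the deadline passes.
func retryProbe(deadline time.Duration, probe func() error) error {
	var err error
	for start := time.Now(); time.Since(start) < deadline; time.Sleep(5 * time.Second) {
		if err = probe(); err == nil {
			return nil
		}
	}
	return err
}

func main() {
	err := retryProbe(2*time.Minute, func() error {
		return exec.Command("kubectl", "--context", "bridge-917885", "exec",
			"deployment/netcat", "--", "nslookup", "kubernetes.default").Run()
	})
	fmt.Println("final result:", err)
}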

TestStartStop/group/embed-certs/serial/FirstStart (41.83s)
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-903777 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.1
E0911 11:47:22.263529  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/functional-224127/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-903777 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.1: (41.830664498s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (41.83s)

TestNetworkPlugins/group/bridge/Localhost (0.16s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-917885 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.16s)

TestNetworkPlugins/group/bridge/HairPin (0.14s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-917885 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.14s)
E0911 11:54:27.527880  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/bridge-917885/client.crt: no such file or directory
E0911 11:54:30.136518  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/ingress-addon-legacy-452365/client.crt: no such file or directory
E0911 11:54:38.498285  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/auto-917885/client.crt: no such file or directory
E0911 11:54:41.268480  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/kindnet-917885/client.crt: no such file or directory

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (67.95s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-678832 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-678832 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.1: (1m7.946838027s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (67.95s)

TestStartStop/group/no-preload/serial/DeployApp (10.38s)
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-986245 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [4cdf2424-ef3d-4542-8a20-12117da515ca] Pending
helpers_test.go:344: "busybox" [4cdf2424-ef3d-4542-8a20-12117da515ca] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [4cdf2424-ef3d-4542-8a20-12117da515ca] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.025034744s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-986245 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.38s)
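Note: the closing "ulimit -n" exec is a smoke test that "kubectl exec" works against the fresh cluster; it prints the container's open-file-descriptor limit. The same check from Go (context name from the log; kubectl must be on PATH):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Exec into the busybox pod and read its open-file limit, as above.
	out, err := exec.Command("kubectl", "--context", "no-preload-986245",
		"exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n").Output()
	if err != nil {
		fmt.Println("exec failed:", err)
		return
	}
	fmt.Println("open-file limit:", strings.TrimSpace(string(out)))
}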

TestStartStop/group/embed-certs/serial/DeployApp (8.32s)
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-903777 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [d7e5055c-f654-49e7-aa1a-92895cbea535] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [d7e5055c-f654-49e7-aa1a-92895cbea535] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.017466036s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-903777 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.32s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.93s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-986245 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-986245 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.93s)

TestStartStop/group/no-preload/serial/Stop (12.01s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-986245 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-986245 --alsologtostderr -v=3: (12.00574394s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.01s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.98s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-903777 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-903777 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.98s)

TestStartStop/group/embed-certs/serial/Stop (12.06s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-903777 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-903777 --alsologtostderr -v=3: (12.062333225s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.06s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.16s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-986245 -n no-preload-986245
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-986245 -n no-preload-986245: exit status 7 (60.463304ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-986245 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.16s)
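Note: --format={{.Host}} is a Go text/template applied to the status value, which is why stdout above is just "Stopped" while the stopped state itself is reported through exit status 7. A minimal illustration of the templating mechanism (the struct here is a stand-in, not minikube's actual type):

package main

import (
	"os"
	"text/template"
)

func main() {
	// Stand-in status value; minikube's real struct has more fields.
	status := struct{ Host string }{Host: "Stopped"}
	tmpl := template.Must(template.New("status").Parse("{{.Host}}\n"))
	tmpl.Execute(os.Stdout, status) // prints: Stopped
}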

TestStartStop/group/no-preload/serial/SecondStart (335.05s)
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-986245 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-986245 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.1: (5m34.737050921s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-986245 -n no-preload-986245
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (335.05s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.17s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-903777 -n embed-certs-903777
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-903777 -n embed-certs-903777: exit status 7 (65.257045ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-903777 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.17s)

TestStartStop/group/embed-certs/serial/SecondStart (338.98s)
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-903777 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-903777 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.1: (5m38.57293415s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-903777 -n embed-certs-903777
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (338.98s)

TestStartStop/group/old-k8s-version/serial/DeployApp (8.4s)
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-416610 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [1c238d57-b8ef-4cce-a6a7-eccc2c952eb6] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [1c238d57-b8ef-4cce-a6a7-eccc2c952eb6] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.015271839s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-416610 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.40s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.83s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-416610 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-416610 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.83s)

TestStartStop/group/old-k8s-version/serial/Stop (12.02s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-416610 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-416610 --alsologtostderr -v=3: (12.015742122s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.02s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.16s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-416610 -n old-k8s-version-416610
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-416610 -n old-k8s-version-416610: exit status 7 (62.638277ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-416610 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.16s)

TestStartStop/group/old-k8s-version/serial/SecondStart (398.61s)
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-416610 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-416610 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0: (6m38.307288774s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-416610 -n old-k8s-version-416610
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (398.61s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.38s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-678832 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [79add892-d9f5-4437-acb0-f6fc4b28e455] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [79add892-d9f5-4437-acb0-f6fc4b28e455] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.016470458s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-678832 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.38s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.9s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-678832 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-678832 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.90s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.36s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-678832 --alsologtostderr -v=3
E0911 11:49:10.812970  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/auto-917885/client.crt: no such file or directory
E0911 11:49:10.818220  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/auto-917885/client.crt: no such file or directory
E0911 11:49:10.828491  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/auto-917885/client.crt: no such file or directory
E0911 11:49:10.848734  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/auto-917885/client.crt: no such file or directory
E0911 11:49:10.889013  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/auto-917885/client.crt: no such file or directory
E0911 11:49:10.970126  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/auto-917885/client.crt: no such file or directory
E0911 11:49:11.130804  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/auto-917885/client.crt: no such file or directory
E0911 11:49:11.451268  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/auto-917885/client.crt: no such file or directory
E0911 11:49:12.092410  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/auto-917885/client.crt: no such file or directory
E0911 11:49:13.373398  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/auto-917885/client.crt: no such file or directory
E0911 11:49:13.583930  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/kindnet-917885/client.crt: no such file or directory
E0911 11:49:13.589243  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/kindnet-917885/client.crt: no such file or directory
E0911 11:49:13.599549  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/kindnet-917885/client.crt: no such file or directory
E0911 11:49:13.619856  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/kindnet-917885/client.crt: no such file or directory
E0911 11:49:13.660189  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/kindnet-917885/client.crt: no such file or directory
E0911 11:49:13.740587  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/kindnet-917885/client.crt: no such file or directory
E0911 11:49:13.900781  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/kindnet-917885/client.crt: no such file or directory
E0911 11:49:14.221361  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/kindnet-917885/client.crt: no such file or directory
E0911 11:49:14.861740  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/kindnet-917885/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-678832 --alsologtostderr -v=3: (12.359420653s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.36s)
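Note: the E0911 cert_rotation.go:168 lines above (and the longer run in the SecondStart log below) come from client-go's certificate-rotation loop, which keeps re-reading client key pairs for profiles such as auto-917885 and kindnet-917885 whose files were deleted when those profiles were torn down; each reload fails with the same open error. They are leftover noise from other profiles, not failures of this test. A minimal sketch of why a removed client.crt produces exactly this error text, using only the standard library; the client.key path is an assumed sibling of the client.crt seen in the log.

	// certerr_sketch.go: reloading a removed key pair reproduces the error shape.
	package main

	import (
		"crypto/tls"
		"log"
	)

	func main() {
		profile := "/home/jenkins/minikube-integration/17223-136166/.minikube/profiles/auto-917885"
		_, err := tls.LoadX509KeyPair(profile+"/client.crt", profile+"/client.key")
		if err != nil {
			// Prints: key failed with : open .../client.crt: no such file or directory
			log.Printf("key failed with : %v", err)
		}
	}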

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.17s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-678832 -n default-k8s-diff-port-678832
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-678832 -n default-k8s-diff-port-678832: exit status 7 (66.082947ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-678832 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.17s)
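Note: the --format flag above is an ordinary Go text/template evaluated against minikube's status value, which is why a stopped host renders as the bare word Stopped; the accompanying exit status 7 encodes that stopped state, and the test explicitly tolerates it ("may be ok"). A sketch of the template mechanics; the struct below is a simplified stand-in, with only the field names exercised in this log.

	// format_sketch.go: how {{.Host}} renders against a status value.
	package main

	import (
		"os"
		"text/template"
	)

	// Status is a simplified stand-in for minikube's status struct; only the
	// fields queried in this log ({{.Host}}, {{.Kubelet}}, {{.APIServer}}) appear.
	type Status struct {
		Host, Kubelet, APIServer string
	}

	func main() {
		tmpl := template.Must(template.New("status").Parse("{{.Host}}"))
		// Prints "Stopped", matching the -- stdout -- block above.
		tmpl.Execute(os.Stdout, Status{Host: "Stopped", Kubelet: "Stopped", APIServer: "Stopped"})
	}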

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (340.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-678832 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.1
E0911 11:49:15.934383  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/auto-917885/client.crt: no such file or directory
E0911 11:49:16.142941  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/kindnet-917885/client.crt: no such file or directory
E0911 11:49:18.704025  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/kindnet-917885/client.crt: no such file or directory
E0911 11:49:21.055184  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/auto-917885/client.crt: no such file or directory
E0911 11:49:23.824283  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/kindnet-917885/client.crt: no such file or directory
E0911 11:49:30.136101  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/ingress-addon-legacy-452365/client.crt: no such file or directory
E0911 11:49:31.295828  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/auto-917885/client.crt: no such file or directory
E0911 11:49:34.065391  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/kindnet-917885/client.crt: no such file or directory
E0911 11:49:35.937465  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/addons-387581/client.crt: no such file or directory
E0911 11:49:51.776167  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/auto-917885/client.crt: no such file or directory
E0911 11:49:54.545782  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/kindnet-917885/client.crt: no such file or directory
E0911 11:49:56.803537  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/calico-917885/client.crt: no such file or directory
E0911 11:49:56.808849  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/calico-917885/client.crt: no such file or directory
E0911 11:49:56.819259  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/calico-917885/client.crt: no such file or directory
E0911 11:49:56.839608  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/calico-917885/client.crt: no such file or directory
E0911 11:49:56.879916  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/calico-917885/client.crt: no such file or directory
E0911 11:49:56.960929  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/calico-917885/client.crt: no such file or directory
E0911 11:49:57.121339  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/calico-917885/client.crt: no such file or directory
E0911 11:49:57.441773  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/calico-917885/client.crt: no such file or directory
E0911 11:49:58.082572  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/calico-917885/client.crt: no such file or directory
E0911 11:49:59.362885  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/calico-917885/client.crt: no such file or directory
E0911 11:50:01.923499  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/calico-917885/client.crt: no such file or directory
E0911 11:50:07.043980  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/calico-917885/client.crt: no such file or directory
E0911 11:50:17.284453  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/calico-917885/client.crt: no such file or directory
E0911 11:50:32.736656  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/auto-917885/client.crt: no such file or directory
E0911 11:50:35.506760  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/kindnet-917885/client.crt: no such file or directory
E0911 11:50:37.765297  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/calico-917885/client.crt: no such file or directory
E0911 11:50:44.195152  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/custom-flannel-917885/client.crt: no such file or directory
E0911 11:50:44.200460  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/custom-flannel-917885/client.crt: no such file or directory
E0911 11:50:44.210762  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/custom-flannel-917885/client.crt: no such file or directory
E0911 11:50:44.231049  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/custom-flannel-917885/client.crt: no such file or directory
E0911 11:50:44.271737  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/custom-flannel-917885/client.crt: no such file or directory
E0911 11:50:44.352091  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/custom-flannel-917885/client.crt: no such file or directory
E0911 11:50:44.512789  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/custom-flannel-917885/client.crt: no such file or directory
E0911 11:50:44.833373  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/custom-flannel-917885/client.crt: no such file or directory
E0911 11:50:45.474290  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/custom-flannel-917885/client.crt: no such file or directory
E0911 11:50:46.755305  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/custom-flannel-917885/client.crt: no such file or directory
E0911 11:50:49.315761  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/custom-flannel-917885/client.crt: no such file or directory
E0911 11:50:54.436489  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/custom-flannel-917885/client.crt: no such file or directory
E0911 11:51:04.676790  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/custom-flannel-917885/client.crt: no such file or directory
E0911 11:51:07.702518  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/enable-default-cni-917885/client.crt: no such file or directory
E0911 11:51:07.707771  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/enable-default-cni-917885/client.crt: no such file or directory
E0911 11:51:07.718067  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/enable-default-cni-917885/client.crt: no such file or directory
E0911 11:51:07.738342  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/enable-default-cni-917885/client.crt: no such file or directory
E0911 11:51:07.778632  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/enable-default-cni-917885/client.crt: no such file or directory
E0911 11:51:07.858958  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/enable-default-cni-917885/client.crt: no such file or directory
E0911 11:51:08.019268  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/enable-default-cni-917885/client.crt: no such file or directory
E0911 11:51:08.340229  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/enable-default-cni-917885/client.crt: no such file or directory
E0911 11:51:08.981154  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/enable-default-cni-917885/client.crt: no such file or directory
E0911 11:51:10.262110  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/enable-default-cni-917885/client.crt: no such file or directory
E0911 11:51:12.822828  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/enable-default-cni-917885/client.crt: no such file or directory
E0911 11:51:17.943178  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/enable-default-cni-917885/client.crt: no such file or directory
E0911 11:51:18.725935  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/calico-917885/client.crt: no such file or directory
E0911 11:51:25.157522  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/custom-flannel-917885/client.crt: no such file or directory
E0911 11:51:28.184348  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/enable-default-cni-917885/client.crt: no such file or directory
E0911 11:51:32.376400  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/flannel-917885/client.crt: no such file or directory
E0911 11:51:32.381708  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/flannel-917885/client.crt: no such file or directory
E0911 11:51:32.391938  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/flannel-917885/client.crt: no such file or directory
E0911 11:51:32.412263  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/flannel-917885/client.crt: no such file or directory
E0911 11:51:32.452586  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/flannel-917885/client.crt: no such file or directory
E0911 11:51:32.532766  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/flannel-917885/client.crt: no such file or directory
E0911 11:51:32.693182  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/flannel-917885/client.crt: no such file or directory
E0911 11:51:32.891602  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/addons-387581/client.crt: no such file or directory
E0911 11:51:33.014174  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/flannel-917885/client.crt: no such file or directory
E0911 11:51:33.655108  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/flannel-917885/client.crt: no such file or directory
E0911 11:51:34.935770  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/flannel-917885/client.crt: no such file or directory
E0911 11:51:37.496391  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/flannel-917885/client.crt: no such file or directory
E0911 11:51:42.617254  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/flannel-917885/client.crt: no such file or directory
E0911 11:51:43.684513  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/bridge-917885/client.crt: no such file or directory
E0911 11:51:43.689808  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/bridge-917885/client.crt: no such file or directory
E0911 11:51:43.700091  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/bridge-917885/client.crt: no such file or directory
E0911 11:51:43.720412  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/bridge-917885/client.crt: no such file or directory
E0911 11:51:43.760712  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/bridge-917885/client.crt: no such file or directory
E0911 11:51:43.841045  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/bridge-917885/client.crt: no such file or directory
E0911 11:51:44.001468  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/bridge-917885/client.crt: no such file or directory
E0911 11:51:44.322295  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/bridge-917885/client.crt: no such file or directory
E0911 11:51:44.963218  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/bridge-917885/client.crt: no such file or directory
E0911 11:51:46.244135  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/bridge-917885/client.crt: no such file or directory
E0911 11:51:48.665464  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/enable-default-cni-917885/client.crt: no such file or directory
E0911 11:51:48.804729  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/bridge-917885/client.crt: no such file or directory
E0911 11:51:52.858276  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/flannel-917885/client.crt: no such file or directory
E0911 11:51:53.925531  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/bridge-917885/client.crt: no such file or directory
E0911 11:51:54.657796  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/auto-917885/client.crt: no such file or directory
E0911 11:51:57.427886  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/kindnet-917885/client.crt: no such file or directory
E0911 11:52:04.166125  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/bridge-917885/client.crt: no such file or directory
E0911 11:52:06.118336  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/custom-flannel-917885/client.crt: no such file or directory
E0911 11:52:13.338533  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/flannel-917885/client.crt: no such file or directory
E0911 11:52:22.263039  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/functional-224127/client.crt: no such file or directory
E0911 11:52:24.647295  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/bridge-917885/client.crt: no such file or directory
E0911 11:52:29.626135  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/enable-default-cni-917885/client.crt: no such file or directory
E0911 11:52:40.646287  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/calico-917885/client.crt: no such file or directory
E0911 11:52:54.298977  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/flannel-917885/client.crt: no such file or directory
E0911 11:53:05.607663  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/bridge-917885/client.crt: no such file or directory
E0911 11:53:28.039001  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/custom-flannel-917885/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-678832 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.1: (5m39.737986794s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-678832 -n default-k8s-diff-port-678832
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (340.03s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (13.02s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-d4th9" [0eff5651-2be7-4cc7-8989-96e13857f8ba] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0911 11:53:51.547118  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/enable-default-cni-917885/client.crt: no such file or directory
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-d4th9" [0eff5651-2be7-4cc7-8989-96e13857f8ba] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 13.017134371s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (13.02s)
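Note: the helpers_test.go:344 lines above are a label-selector watch reporting the dashboard pod's phase flipping from Pending to Running within the 9m budget. A rough polling equivalent via kubectl, purely illustrative.

	// poll_sketch.go: poll the dashboard pod's phase until it is Running.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		args := []string{
			"--context", "no-preload-986245",
			"-n", "kubernetes-dashboard",
			"get", "pod", "-l", "k8s-app=kubernetes-dashboard",
			"-o", "jsonpath={.items[0].status.phase}",
		}
		deadline := time.Now().Add(9 * time.Minute) // the test's 9m0s budget
		for time.Now().Before(deadline) {
			out, _ := exec.Command("kubectl", args...).Output()
			fmt.Printf("phase: %s\n", out) // Pending, then Running
			if string(out) == "Running" {
				return
			}
			time.Sleep(5 * time.Second)
		}
	}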

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (14.07s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-74pnz" [f50bd6b7-7bb4-4a95-95e2-928e3396b16f] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-74pnz" [f50bd6b7-7bb4-4a95-95e2-928e3396b16f] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 14.071819203s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (14.07s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-d4th9" [0eff5651-2be7-4cc7-8989-96e13857f8ba] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.009680309s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-986245 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.31s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p no-preload-986245 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.31s)
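Note: the image audit above dumps the node's CRI image store as JSON over SSH and reports every repo tag outside minikube's expected set, which is how the kindnetd and busybox lines get flagged. A sketch of that parse, assuming the images/repoTags shape emitted by `crictl images -o json`; the prefix check is a deliberate oversimplification of the suite's allow-list.

	// images_sketch.go: flag non-minikube images from `crictl images -o json`.
	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
		"strings"
	)

	// listing mirrors the JSON emitted by `crictl images -o json`.
	type listing struct {
		Images []struct {
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}

	func main() {
		out, err := exec.Command("sudo", "crictl", "images", "-o", "json").Output()
		if err != nil {
			panic(err)
		}
		var l listing
		if err := json.Unmarshal(out, &l); err != nil {
			panic(err)
		}
		for _, img := range l.Images {
			for _, tag := range img.RepoTags {
				// Illustrative allow-list; the suite compares against its known image set,
				// so kindest/kindnetd and gcr.io/k8s-minikube/busybox both get reported.
				if !strings.HasPrefix(tag, "registry.k8s.io/") {
					fmt.Println("Found non-minikube image:", tag)
				}
			}
		}
	}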

TestStartStop/group/no-preload/serial/Pause (2.67s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-986245 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-986245 -n no-preload-986245
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-986245 -n no-preload-986245: exit status 2 (297.285401ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-986245 -n no-preload-986245
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-986245 -n no-preload-986245: exit status 2 (296.385653ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-986245 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-986245 -n no-preload-986245
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-986245 -n no-preload-986245
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.67s)
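Note: the Pause block above is a round trip: pause the profile, confirm the apiserver reports Paused and the kubelet Stopped (each status query deliberately exits 2, which the test tolerates), then unpause and re-check. A compressed sketch of that flow using the binary path and profile name from the log; runStatus is an illustrative helper, not the suite's code.

	// pause_sketch.go: pause/verify/unpause round trip for a profile.
	package main

	import (
		"fmt"
		"os/exec"
	)

	// runStatus runs a status query and returns its output and exit code;
	// a non-zero code here is expected while the cluster is paused.
	func runStatus(format string) (string, int) {
		cmd := exec.Command("out/minikube-linux-amd64",
			"status", "--format="+format, "-p", "no-preload-986245", "-n", "no-preload-986245")
		out, err := cmd.CombinedOutput()
		code := 0
		if ee, ok := err.(*exec.ExitError); ok {
			code = ee.ExitCode()
		}
		return string(out), code
	}

	func main() {
		exec.Command("out/minikube-linux-amd64", "pause", "-p", "no-preload-986245").Run()
		api, code := runStatus("{{.APIServer}}")
		fmt.Printf("apiserver: %s (exit %d, may be ok)\n", api, code) // Paused, exit 2
		kubelet, code := runStatus("{{.Kubelet}}")
		fmt.Printf("kubelet: %s (exit %d, may be ok)\n", kubelet, code) // Stopped, exit 2
		exec.Command("out/minikube-linux-amd64", "unpause", "-p", "no-preload-986245").Run()
	}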

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-74pnz" [f50bd6b7-7bb4-4a95-95e2-928e3396b16f] Running
E0911 11:54:10.812236  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/auto-917885/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.010008811s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-903777 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/newest-cni/serial/FirstStart (38.15s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-043718 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.1
E0911 11:54:13.583378  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/kindnet-917885/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-043718 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.1: (38.145444176s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (38.15s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.43s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p embed-certs-903777 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.43s)

TestStartStop/group/embed-certs/serial/Pause (3.31s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-903777 --alsologtostderr -v=1
E0911 11:54:16.220167  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/flannel-917885/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-903777 -n embed-certs-903777
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-903777 -n embed-certs-903777: exit status 2 (322.711836ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-903777 -n embed-certs-903777
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-903777 -n embed-certs-903777: exit status 2 (316.605369ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-903777 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-903777 -n embed-certs-903777
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-903777 -n embed-certs-903777
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.31s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.92s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-043718 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.92s)

TestStartStop/group/newest-cni/serial/Stop (2.24s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-043718 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-043718 --alsologtostderr -v=3: (2.239934086s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (2.24s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-043718 -n newest-cni-043718
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-043718 -n newest-cni-043718: exit status 7 (80.203918ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-043718 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/newest-cni/serial/SecondStart (26.28s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-043718 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-043718 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.1: (25.969677124s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-043718 -n newest-cni-043718
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (26.28s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (12.02s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-pxsk7" [9c8d2806-4bb7-426a-bd17-379f9aba9f18] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0911 11:54:56.803759  143417 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-136166/.minikube/profiles/calico-917885/client.crt: no such file or directory
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-pxsk7" [9c8d2806-4bb7-426a-bd17-379f9aba9f18] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 12.016564442s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (12.02s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-pxsk7" [9c8d2806-4bb7-426a-bd17-379f9aba9f18] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.009821179s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-678832 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.3s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p default-k8s-diff-port-678832 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.30s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.82s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-678832 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-678832 -n default-k8s-diff-port-678832
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-678832 -n default-k8s-diff-port-678832: exit status 2 (315.877517ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-678832 -n default-k8s-diff-port-678832
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-678832 -n default-k8s-diff-port-678832: exit status 2 (312.636497ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-678832 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-678832 -n default-k8s-diff-port-678832
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-678832 -n default-k8s-diff-port-678832
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.82s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.3s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p newest-cni-043718 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.30s)

TestStartStop/group/newest-cni/serial/Pause (2.5s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-043718 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-043718 -n newest-cni-043718
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-043718 -n newest-cni-043718: exit status 2 (280.50092ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-043718 -n newest-cni-043718
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-043718 -n newest-cni-043718: exit status 2 (279.845198ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-043718 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-043718 -n newest-cni-043718
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-043718 -n newest-cni-043718
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.50s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-9p4kl" [3eecbd11-4cd9-4a4d-9108-2908e2a2821c] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.017496046s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-9p4kl" [3eecbd11-4cd9-4a4d-9108-2908e2a2821c] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.008973673s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-416610 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.28s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p old-k8s-version-416610 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.28s)

TestStartStop/group/old-k8s-version/serial/Pause (2.58s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-416610 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-416610 -n old-k8s-version-416610
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-416610 -n old-k8s-version-416610: exit status 2 (280.318505ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-416610 -n old-k8s-version-416610
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-416610 -n old-k8s-version-416610: exit status 2 (280.931176ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-416610 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-416610 -n old-k8s-version-416610
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-416610 -n old-k8s-version-416610
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.58s)

Test skip (24/298)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:152: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.28.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.1/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.1/cached-images (0.00s)

TestDownloadOnly/v1.28.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.1/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.1/binaries (0.00s)

TestDownloadOnly/v1.28.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.1/kubectl
aaa_download_only_test.go:152: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.1/kubectl (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:474: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)
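
This is the first of several runtime-gated skips in this report (the DockerEnv and PodmanEnv entries below follow the same shape): read the runtime under test and bail out early when it is not the one being exercised. A minimal sketch; containerRuntime here is a stand-in for however the harness exposes its runtime flag, not the actual test code:

    package sketch

    import "testing"

    // containerRuntime stands in for the value of the harness's runtime flag.
    var containerRuntime = "crio"

    // skipUnlessDocker mirrors the guard behind the TestDockerFlags skip above.
    func skipUnlessDocker(t *testing.T) {
    	if containerRuntime != "docker" {
    		t.Skipf("skipping: only runs with docker container runtime, currently testing %s", containerRuntime)
    	}
    }
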
x
+
TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

x
+
TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

x
+
TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

x
+
TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

x
+
TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)
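
This skip and the two tunnel skips that follow are gated on both host OS and driver, since minikube's DNS forwarding is implemented only for the Hyperkit driver on macOS. A sketch of that double guard, with driverName standing in for the harness's driver flag (illustrative, not the actual test code):

    package sketch

    import (
    	"runtime"
    	"testing"
    )

    // driverName stands in for the harness's driver flag.
    var driverName = "docker"

    // skipUnlessHyperkitOnDarwin mirrors the DNS-forwarding guard above:
    // both conditions must hold, or the test is skipped.
    func skipUnlessHyperkitOnDarwin(t *testing.T) {
    	if runtime.GOOS != "darwin" || driverName != "hyperkit" {
    		t.Skip("DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding")
    	}
    }
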
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

x
+
TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

x
+
TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

x
+
TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

x
+
TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

x
+
TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

x
+
TestNetworkPlugins/group/kubenet (3.69s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the crio container runtime requires a CNI
panic.go:522: 
----------------------- debugLogs start: kubenet-917885 [pass: true] --------------------------------
(The kubenet-917885 profile was never created, since the test skips before a cluster is started, so every probe below failed in one of three ways; identical outputs are grouped here instead of being repeated per probe.)

>>> netcat: nslookup kubernetes.default; nslookup debug kubernetes.default a-records; dig search kubernetes.default; dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53 and tcp/53; nc 10.96.0.10 udp/53 and tcp/53; /etc/nsswitch.conf; /etc/hosts; /etc/resolv.conf
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods; cms:
Error in configuration: context was not found for specified context: kubenet-917885

>>> k8s: describe netcat deployment and pod(s); netcat logs; describe coredns deployment and pods; coredns logs; describe api server pod(s); api server logs; describe kube-proxy daemon set and pod(s); kube-proxy logs:
error: context "kubenet-917885" does not exist

>>> host: /etc/nsswitch.conf; /etc/hosts; /etc/resolv.conf; crictl pods; crictl containers; /etc/cni; ip a s; ip r s; iptables-save; iptables table nat; kubelet daemon status and config; kubelet logs; /etc/kubernetes/kubelet.conf; /var/lib/kubelet/config.yaml; docker daemon status and config; /etc/docker/daemon.json; docker system info; cri-docker daemon status and config; /etc/systemd/system/cri-docker.service.d/10-cni.conf; /usr/lib/systemd/system/cri-docker.service; cri-dockerd version; containerd daemon status and config; /lib/systemd/system/containerd.service; /etc/containerd/config.toml; containerd config dump; crio daemon status and config; /etc/crio; crio config:
* Profile "kubenet-917885" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-917885"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

----------------------- debugLogs end: kubenet-917885 [took: 3.496713748s] --------------------------------
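
The debugLogs block above is produced by a collector that loops over a fixed list of probe commands and records each one's combined output, pass or fail, which is why a never-started profile yields the same errors over and over. A rough, self-contained Go sketch of that idea (the probe list and helper name are illustrative, not minikube's actual code):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // debugLogs runs each probe against a profile and prints whatever comes
    // back. Failing commands are recorded too, so a missing profile produces
    // a "context was not found" line for every kubectl probe.
    func debugLogs(profile string) {
    	probes := [][]string{
    		{"kubectl", "--context", profile, "get", "nodes"},
    		{"kubectl", "--context", profile, "config", "view"},
    		{"out/minikube-linux-amd64", "-p", profile, "ssh", "cat /etc/resolv.conf"},
    	}
    	for _, argv := range probes {
    		out, err := exec.Command(argv[0], argv[1:]...).CombinedOutput()
    		fmt.Printf(">>> %v:\n%s(err: %v)\n\n", argv, out, err)
    	}
    }

    func main() { debugLogs("kubenet-917885") }
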
helpers_test.go:175: Cleaning up "kubenet-917885" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-917885
--- SKIP: TestNetworkPlugins/group/kubenet (3.69s)

x
+
TestNetworkPlugins/group/cilium (3.62s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:522: 
----------------------- debugLogs start: cilium-917885 [pass: true] --------------------------------
(As with kubenet above, the cilium-917885 profile was never created, so every probe failed in the same three ways; identical outputs are grouped.)

>>> netcat: nslookup kubernetes.default; nslookup debug kubernetes.default a-records; dig search kubernetes.default; dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53 and tcp/53; nc 10.96.0.10 udp/53 and tcp/53; /etc/nsswitch.conf; /etc/hosts; /etc/resolv.conf
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods; describe cilium daemon set and its pod(s); describe cilium deployment and its pod(s); cms:
Error in configuration: context was not found for specified context: cilium-917885

>>> k8s: describe netcat deployment and pod(s); netcat logs; describe coredns deployment and pods; coredns logs; describe api server pod(s); api server logs; cilium daemon set container(s) logs (current and previous); cilium deployment container(s) logs (current and previous); describe kube-proxy daemon set and pod(s); kube-proxy logs:
error: context "cilium-917885" does not exist

>>> host: /etc/nsswitch.conf; /etc/hosts; /etc/resolv.conf; crictl pods; crictl containers; /etc/cni; ip a s; ip r s; iptables-save; iptables table nat; kubelet daemon status and config; kubelet logs; /etc/kubernetes/kubelet.conf; /var/lib/kubelet/config.yaml; docker daemon status and config; /etc/docker/daemon.json; docker system info; cri-docker daemon status and config; /etc/systemd/system/cri-docker.service.d/10-cni.conf; /usr/lib/systemd/system/cri-docker.service; cri-dockerd version; containerd daemon status and config; /lib/systemd/system/containerd.service; /etc/containerd/config.toml; containerd config dump; crio daemon status and config; /etc/crio; crio config:
* Profile "cilium-917885" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-917885"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

----------------------- debugLogs end: cilium-917885 [took: 3.491434672s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-917885" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-917885
--- SKIP: TestNetworkPlugins/group/cilium (3.62s)

x
+
TestStartStop/group/disable-driver-mounts (0.15s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-289000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-289000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.15s)