Test Report: Docker_Linux_crio_arm64 17340

49babfe4fcdff3bcc398a25366bae00d3ae6dc66:2023-10-02:31256

Failed tests (7/299)

Order | Failed test                                         | Duration (s)
------|-----------------------------------------------------|-------------
25    | TestAddons/parallel/Ingress                         | 170.56
155   | TestIngressAddonLegacy/serial/ValidateIngressAddons | 180.35
205   | TestMultiNode/serial/PingHostFrom2Pods              | 4.52
226   | TestRunningBinaryUpgrade                            | 69.06
229   | TestMissingContainerUpgrade                         | 149.78
252   | TestStoppedBinaryUpgrade/Upgrade                    | 71.98
263   | TestPause/serial/SecondStartNoReconfiguration       | 49.01
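These failures come from minikube's integration suite. To re-run a single one locally, the usual shape is a filtered `go test` invocation from the repo root; the `./test/integration` package path and the 30m timeout below are assumptions about a local checkout, not taken from this report:

```shell
# Sketch: build the `go test` command that re-runs one failed integration
# test by its full subtest path. Package path and timeout are assumptions.
build_run_cmd() {
  # Anchor the regex so only the named test (and its subtests) run;
  # `go test -run` splits the pattern on '/' to match each subtest level.
  printf 'go test ./test/integration -run "^%s$" -timeout 30m' "$1"
}
build_run_cmd "TestAddons/parallel/Ingress"
```

Against a configured environment the printed command would be executed from the repo root; the CI run here also passes driver and runtime flags (see the `--driver=docker --container-runtime=crio` start args in the Audit log below).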
TestAddons/parallel/Ingress (170.56s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:185: (dbg) Run:  kubectl --context addons-346248 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:210: (dbg) Run:  kubectl --context addons-346248 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:223: (dbg) Run:  kubectl --context addons-346248 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:228: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [712f9658-eebe-483f-adee-b6a40d66ef9c] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [712f9658-eebe-483f-adee-b6a40d66ef9c] Running
addons_test.go:228: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.024504624s
addons_test.go:240: (dbg) Run:  out/minikube-linux-arm64 -p addons-346248 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:240: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-346248 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.881620622s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:256: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:264: (dbg) Run:  kubectl --context addons-346248 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:269: (dbg) Run:  out/minikube-linux-arm64 -p addons-346248 ip
addons_test.go:275: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:275: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.037715654s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
addons_test.go:277: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:281: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
addons_test.go:284: (dbg) Run:  out/minikube-linux-arm64 -p addons-346248 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:284: (dbg) Done: out/minikube-linux-arm64 -p addons-346248 addons disable ingress-dns --alsologtostderr -v=1: (1.23513554s)
addons_test.go:289: (dbg) Run:  out/minikube-linux-arm64 -p addons-346248 addons disable ingress --alsologtostderr -v=1
addons_test.go:289: (dbg) Done: out/minikube-linux-arm64 -p addons-346248 addons disable ingress --alsologtostderr -v=1: (7.762939003s)
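Both failing probes time out (curl's exit status 28 is its operation-timeout code), and they can be reproduced by hand against the same profile. A minimal sketch, with the profile name and node IP taken from this log; the live-cluster commands are left as comments since they need the running cluster:

```shell
# Sketch: rebuild the exact in-node curl probe the test runs over SSH.
build_curl_cmd() {
  # Same request the test issues: plain HTTP to localhost with a Host header
  printf "curl -s http://127.0.0.1/ -H 'Host: %s'" "$1"
}
build_curl_cmd "nginx.example.com"
# Against the live cluster (profile addons-346248, node IP 192.168.49.2):
#   out/minikube-linux-arm64 -p addons-346248 ssh "$(build_curl_cmd nginx.example.com)"
#   nslookup hello-john.test 192.168.49.2
```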
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-346248
helpers_test.go:235: (dbg) docker inspect addons-346248:

-- stdout --
	[
	    {
	        "Id": "10d010d0d3a5ce7640728cda5294e769f949929a8cab887a898059dc6273e615",
	        "Created": "2023-10-02T11:39:45.422005466Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2500593,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-10-02T11:39:45.776630741Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:560a33002deec07a703a16e2b1dbf6aecde4c0d46aaefa1cb6df4c8c8a7774a7",
	        "ResolvConfPath": "/var/lib/docker/containers/10d010d0d3a5ce7640728cda5294e769f949929a8cab887a898059dc6273e615/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/10d010d0d3a5ce7640728cda5294e769f949929a8cab887a898059dc6273e615/hostname",
	        "HostsPath": "/var/lib/docker/containers/10d010d0d3a5ce7640728cda5294e769f949929a8cab887a898059dc6273e615/hosts",
	        "LogPath": "/var/lib/docker/containers/10d010d0d3a5ce7640728cda5294e769f949929a8cab887a898059dc6273e615/10d010d0d3a5ce7640728cda5294e769f949929a8cab887a898059dc6273e615-json.log",
	        "Name": "/addons-346248",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-346248:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-346248",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/d906f0c5839604202e05f90e5c1cb337bd2a3e7ab28d86683b1dc929950384c2-init/diff:/var/lib/docker/overlay2/1ffc828a09df1e9fa25f5092ba7b162a0fa5a6fe031a41b1f614792625eb1522/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d906f0c5839604202e05f90e5c1cb337bd2a3e7ab28d86683b1dc929950384c2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d906f0c5839604202e05f90e5c1cb337bd2a3e7ab28d86683b1dc929950384c2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d906f0c5839604202e05f90e5c1cb337bd2a3e7ab28d86683b1dc929950384c2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-346248",
	                "Source": "/var/lib/docker/volumes/addons-346248/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-346248",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-346248",
	                "name.minikube.sigs.k8s.io": "addons-346248",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "277fb8f1a05a069fc4d1fce53540d2c11b5f46baa32da699d16657238f8746ca",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35872"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35871"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35868"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35870"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35869"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/277fb8f1a05a",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-346248": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "10d010d0d3a5",
	                        "addons-346248"
	                    ],
	                    "NetworkID": "d9d1a80799b64b24d3989d1982230ea474dd6eaae49dfeb4ef58bbaa301cb875",
	                    "EndpointID": "bf65c20d84a53296db51f8826dc1e8d23117424b91455eec401e437cd50c0425",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-346248 -n addons-346248
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-346248 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-346248 logs -n 25: (1.736786495s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-490357   | jenkins | v1.31.2 | 02 Oct 23 11:39 UTC |                     |
	|         | -p download-only-490357                                                                     |                        |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                                                                |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| start   | -o=json --download-only                                                                     | download-only-490357   | jenkins | v1.31.2 | 02 Oct 23 11:39 UTC |                     |
	|         | -p download-only-490357                                                                     |                        |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                                                                |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube               | jenkins | v1.31.2 | 02 Oct 23 11:39 UTC | 02 Oct 23 11:39 UTC |
	| delete  | -p download-only-490357                                                                     | download-only-490357   | jenkins | v1.31.2 | 02 Oct 23 11:39 UTC | 02 Oct 23 11:39 UTC |
	| delete  | -p download-only-490357                                                                     | download-only-490357   | jenkins | v1.31.2 | 02 Oct 23 11:39 UTC | 02 Oct 23 11:39 UTC |
	| start   | --download-only -p                                                                          | download-docker-791861 | jenkins | v1.31.2 | 02 Oct 23 11:39 UTC |                     |
	|         | download-docker-791861                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-791861                                                                   | download-docker-791861 | jenkins | v1.31.2 | 02 Oct 23 11:39 UTC | 02 Oct 23 11:39 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-393333   | jenkins | v1.31.2 | 02 Oct 23 11:39 UTC |                     |
	|         | binary-mirror-393333                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:46105                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-393333                                                                     | binary-mirror-393333   | jenkins | v1.31.2 | 02 Oct 23 11:39 UTC | 02 Oct 23 11:39 UTC |
	| start   | -p addons-346248 --wait=true                                                                | addons-346248          | jenkins | v1.31.2 | 02 Oct 23 11:39 UTC | 02 Oct 23 11:41 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	| ssh     | addons-346248 ssh cat                                                                       | addons-346248          | jenkins | v1.31.2 | 02 Oct 23 11:42 UTC | 02 Oct 23 11:42 UTC |
	|         | /opt/local-path-provisioner/pvc-630de221-724f-4414-8f37-7bb6fe233ffc_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-346248 addons disable                                                                | addons-346248          | jenkins | v1.31.2 | 02 Oct 23 11:42 UTC | 02 Oct 23 11:42 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-346248 addons                                                                        | addons-346248          | jenkins | v1.31.2 | 02 Oct 23 11:42 UTC | 02 Oct 23 11:42 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-346248 ip                                                                            | addons-346248          | jenkins | v1.31.2 | 02 Oct 23 11:42 UTC | 02 Oct 23 11:42 UTC |
	| addons  | addons-346248 addons disable                                                                | addons-346248          | jenkins | v1.31.2 | 02 Oct 23 11:42 UTC | 02 Oct 23 11:42 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-346248          | jenkins | v1.31.2 | 02 Oct 23 11:42 UTC | 02 Oct 23 11:42 UTC |
	|         | addons-346248                                                                               |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-346248          | jenkins | v1.31.2 | 02 Oct 23 11:42 UTC | 02 Oct 23 11:42 UTC |
	|         | -p addons-346248                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-346248 ssh curl -s                                                                   | addons-346248          | jenkins | v1.31.2 | 02 Oct 23 11:42 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-346248          | jenkins | v1.31.2 | 02 Oct 23 11:42 UTC | 02 Oct 23 11:42 UTC |
	|         | addons-346248                                                                               |                        |         |         |                     |                     |
	| addons  | addons-346248 addons                                                                        | addons-346248          | jenkins | v1.31.2 | 02 Oct 23 11:43 UTC | 02 Oct 23 11:43 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-346248 addons                                                                        | addons-346248          | jenkins | v1.31.2 | 02 Oct 23 11:43 UTC | 02 Oct 23 11:43 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-346248 ip                                                                            | addons-346248          | jenkins | v1.31.2 | 02 Oct 23 11:44 UTC | 02 Oct 23 11:44 UTC |
	| addons  | addons-346248 addons disable                                                                | addons-346248          | jenkins | v1.31.2 | 02 Oct 23 11:44 UTC | 02 Oct 23 11:44 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-346248 addons disable                                                                | addons-346248          | jenkins | v1.31.2 | 02 Oct 23 11:44 UTC | 02 Oct 23 11:44 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/02 11:39:37
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.21.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 11:39:37.931508 2500131 out.go:296] Setting OutFile to fd 1 ...
	I1002 11:39:37.931774 2500131 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 11:39:37.931805 2500131 out.go:309] Setting ErrFile to fd 2...
	I1002 11:39:37.931825 2500131 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 11:39:37.932114 2500131 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17340-2494243/.minikube/bin
	I1002 11:39:37.932716 2500131 out.go:303] Setting JSON to false
	I1002 11:39:37.933739 2500131 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":69724,"bootTime":1696177054,"procs":228,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1045-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1002 11:39:37.933844 2500131 start.go:138] virtualization:  
	I1002 11:39:37.936801 2500131 out.go:177] * [addons-346248] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I1002 11:39:37.939454 2500131 out.go:177]   - MINIKUBE_LOCATION=17340
	I1002 11:39:37.939639 2500131 notify.go:220] Checking for updates...
	I1002 11:39:37.941951 2500131 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 11:39:37.943964 2500131 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17340-2494243/kubeconfig
	I1002 11:39:37.945756 2500131 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17340-2494243/.minikube
	I1002 11:39:37.948259 2500131 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 11:39:37.949994 2500131 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 11:39:37.952021 2500131 driver.go:373] Setting default libvirt URI to qemu:///system
	I1002 11:39:37.976247 2500131 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I1002 11:39:37.976349 2500131 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 11:39:38.063040 2500131 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:38 OomKillDisable:true NGoroutines:50 SystemTime:2023-10-02 11:39:38.05069253 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1045-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archit
ecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> Se
rverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1002 11:39:38.063193 2500131 docker.go:294] overlay module found
	I1002 11:39:38.065406 2500131 out.go:177] * Using the docker driver based on user configuration
	I1002 11:39:38.067306 2500131 start.go:298] selected driver: docker
	I1002 11:39:38.067328 2500131 start.go:902] validating driver "docker" against <nil>
	I1002 11:39:38.067349 2500131 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 11:39:38.068059 2500131 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 11:39:38.143474 2500131 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:38 OomKillDisable:true NGoroutines:50 SystemTime:2023-10-02 11:39:38.132949554 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1045-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> S
erverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1002 11:39:38.143651 2500131 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I1002 11:39:38.143884 2500131 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 11:39:38.146132 2500131 out.go:177] * Using Docker driver with root privileges
	I1002 11:39:38.148210 2500131 cni.go:84] Creating CNI manager for ""
	I1002 11:39:38.148236 2500131 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 11:39:38.148250 2500131 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I1002 11:39:38.148263 2500131 start_flags.go:321] config:
	{Name:addons-346248 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:addons-346248 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 11:39:38.150505 2500131 out.go:177] * Starting control plane node addons-346248 in cluster addons-346248
	I1002 11:39:38.152267 2500131 cache.go:122] Beginning downloading kic base image for docker with crio
	I1002 11:39:38.154072 2500131 out.go:177] * Pulling base image ...
	I1002 11:39:38.155780 2500131 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 in local docker daemon
	I1002 11:39:38.155947 2500131 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1002 11:39:38.155981 2500131 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17340-2494243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-arm64.tar.lz4
	I1002 11:39:38.155994 2500131 cache.go:57] Caching tarball of preloaded images
	I1002 11:39:38.156086 2500131 preload.go:174] Found /home/jenkins/minikube-integration/17340-2494243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1002 11:39:38.156103 2500131 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on crio
	I1002 11:39:38.156511 2500131 profile.go:148] Saving config to /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/addons-346248/config.json ...
	I1002 11:39:38.156589 2500131 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/addons-346248/config.json: {Name:mk0328c03a605e81603b6fa81392e606a827637b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 11:39:38.174230 2500131 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 in local docker daemon, skipping pull
	I1002 11:39:38.174260 2500131 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 exists in daemon, skipping load
	I1002 11:39:38.174279 2500131 cache.go:195] Successfully downloaded all kic artifacts
	I1002 11:39:38.174359 2500131 start.go:365] acquiring machines lock for addons-346248: {Name:mk390d56e60b21a120826473e8e58d2966dbf27e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 11:39:38.174966 2500131 start.go:369] acquired machines lock for "addons-346248" in 578.142µs
	I1002 11:39:38.175015 2500131 start.go:93] Provisioning new machine with config: &{Name:addons-346248 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:addons-346248 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 11:39:38.175115 2500131 start.go:125] createHost starting for "" (driver="docker")
	I1002 11:39:38.177822 2500131 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I1002 11:39:38.178201 2500131 start.go:159] libmachine.API.Create for "addons-346248" (driver="docker")
	I1002 11:39:38.178232 2500131 client.go:168] LocalClient.Create starting
	I1002 11:39:38.178400 2500131 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/17340-2494243/.minikube/certs/ca.pem
	I1002 11:39:38.546164 2500131 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/17340-2494243/.minikube/certs/cert.pem
	I1002 11:39:39.574226 2500131 cli_runner.go:164] Run: docker network inspect addons-346248 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1002 11:39:39.591926 2500131 cli_runner.go:211] docker network inspect addons-346248 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1002 11:39:39.592041 2500131 network_create.go:281] running [docker network inspect addons-346248] to gather additional debugging logs...
	I1002 11:39:39.592070 2500131 cli_runner.go:164] Run: docker network inspect addons-346248
	W1002 11:39:39.610154 2500131 cli_runner.go:211] docker network inspect addons-346248 returned with exit code 1
	I1002 11:39:39.610191 2500131 network_create.go:284] error running [docker network inspect addons-346248]: docker network inspect addons-346248: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-346248 not found
	I1002 11:39:39.610218 2500131 network_create.go:286] output of [docker network inspect addons-346248]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-346248 not found
	
	** /stderr **
	I1002 11:39:39.610286 2500131 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 11:39:39.629469 2500131 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4000fbb940}
	I1002 11:39:39.629509 2500131 network_create.go:123] attempt to create docker network addons-346248 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1002 11:39:39.629572 2500131 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-346248 addons-346248
	I1002 11:39:39.701251 2500131 network_create.go:107] docker network addons-346248 192.168.49.0/24 created
	I1002 11:39:39.701288 2500131 kic.go:117] calculated static IP "192.168.49.2" for the "addons-346248" container
	I1002 11:39:39.701372 2500131 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1002 11:39:39.721820 2500131 cli_runner.go:164] Run: docker volume create addons-346248 --label name.minikube.sigs.k8s.io=addons-346248 --label created_by.minikube.sigs.k8s.io=true
	I1002 11:39:39.742817 2500131 oci.go:103] Successfully created a docker volume addons-346248
	I1002 11:39:39.742910 2500131 cli_runner.go:164] Run: docker run --rm --name addons-346248-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-346248 --entrypoint /usr/bin/test -v addons-346248:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 -d /var/lib
	I1002 11:39:41.140939 2500131 cli_runner.go:217] Completed: docker run --rm --name addons-346248-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-346248 --entrypoint /usr/bin/test -v addons-346248:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 -d /var/lib: (1.397986633s)
	I1002 11:39:41.140970 2500131 oci.go:107] Successfully prepared a docker volume addons-346248
	I1002 11:39:41.140998 2500131 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1002 11:39:41.141018 2500131 kic.go:190] Starting extracting preloaded images to volume ...
	I1002 11:39:41.141108 2500131 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17340-2494243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-346248:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 -I lz4 -xf /preloaded.tar -C /extractDir
	I1002 11:39:45.327657 2500131 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17340-2494243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-346248:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 -I lz4 -xf /preloaded.tar -C /extractDir: (4.18650252s)
	I1002 11:39:45.327696 2500131 kic.go:199] duration metric: took 4.186675 seconds to extract preloaded images to volume
	W1002 11:39:45.327859 2500131 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1002 11:39:45.327984 2500131 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1002 11:39:45.403608 2500131 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-346248 --name addons-346248 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-346248 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-346248 --network addons-346248 --ip 192.168.49.2 --volume addons-346248:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3
	I1002 11:39:45.786301 2500131 cli_runner.go:164] Run: docker container inspect addons-346248 --format={{.State.Running}}
	I1002 11:39:45.809626 2500131 cli_runner.go:164] Run: docker container inspect addons-346248 --format={{.State.Status}}
	I1002 11:39:45.838419 2500131 cli_runner.go:164] Run: docker exec addons-346248 stat /var/lib/dpkg/alternatives/iptables
	I1002 11:39:45.911249 2500131 oci.go:144] the created container "addons-346248" has a running status.
	I1002 11:39:45.911276 2500131 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/17340-2494243/.minikube/machines/addons-346248/id_rsa...
	I1002 11:39:46.537087 2500131 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17340-2494243/.minikube/machines/addons-346248/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1002 11:39:46.567717 2500131 cli_runner.go:164] Run: docker container inspect addons-346248 --format={{.State.Status}}
	I1002 11:39:46.593315 2500131 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1002 11:39:46.593334 2500131 kic_runner.go:114] Args: [docker exec --privileged addons-346248 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1002 11:39:46.700198 2500131 cli_runner.go:164] Run: docker container inspect addons-346248 --format={{.State.Status}}
	I1002 11:39:46.736633 2500131 machine.go:88] provisioning docker machine ...
	I1002 11:39:46.736664 2500131 ubuntu.go:169] provisioning hostname "addons-346248"
	I1002 11:39:46.736737 2500131 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-346248
	I1002 11:39:46.775228 2500131 main.go:141] libmachine: Using SSH client type: native
	I1002 11:39:46.775660 2500131 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ac3c0] 0x3aeb30 <nil>  [] 0s} 127.0.0.1 35872 <nil> <nil>}
	I1002 11:39:46.775674 2500131 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-346248 && echo "addons-346248" | sudo tee /etc/hostname
	I1002 11:39:46.946095 2500131 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-346248
	
	I1002 11:39:46.946174 2500131 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-346248
	I1002 11:39:46.980384 2500131 main.go:141] libmachine: Using SSH client type: native
	I1002 11:39:46.980954 2500131 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ac3c0] 0x3aeb30 <nil>  [] 0s} 127.0.0.1 35872 <nil> <nil>}
	I1002 11:39:46.980986 2500131 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-346248' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-346248/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-346248' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 11:39:47.130155 2500131 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 11:39:47.130185 2500131 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17340-2494243/.minikube CaCertPath:/home/jenkins/minikube-integration/17340-2494243/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17340-2494243/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17340-2494243/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17340-2494243/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17340-2494243/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17340-2494243/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17340-2494243/.minikube}
	I1002 11:39:47.130217 2500131 ubuntu.go:177] setting up certificates
	I1002 11:39:47.130226 2500131 provision.go:83] configureAuth start
	I1002 11:39:47.130290 2500131 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-346248
	I1002 11:39:47.150633 2500131 provision.go:138] copyHostCerts
	I1002 11:39:47.150720 2500131 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-2494243/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17340-2494243/.minikube/ca.pem (1082 bytes)
	I1002 11:39:47.150852 2500131 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-2494243/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17340-2494243/.minikube/cert.pem (1123 bytes)
	I1002 11:39:47.150914 2500131 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-2494243/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17340-2494243/.minikube/key.pem (1675 bytes)
	I1002 11:39:47.150959 2500131 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17340-2494243/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17340-2494243/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17340-2494243/.minikube/certs/ca-key.pem org=jenkins.addons-346248 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube addons-346248]
	I1002 11:39:47.652345 2500131 provision.go:172] copyRemoteCerts
	I1002 11:39:47.652422 2500131 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 11:39:47.652464 2500131 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-346248
	I1002 11:39:47.670414 2500131 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35872 SSHKeyPath:/home/jenkins/minikube-integration/17340-2494243/.minikube/machines/addons-346248/id_rsa Username:docker}
	I1002 11:39:47.771744 2500131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2494243/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 11:39:47.801613 2500131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2494243/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1002 11:39:47.831209 2500131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2494243/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1002 11:39:47.859848 2500131 provision.go:86] duration metric: configureAuth took 729.58539ms
	I1002 11:39:47.859873 2500131 ubuntu.go:193] setting minikube options for container-runtime
	I1002 11:39:47.860063 2500131 config.go:182] Loaded profile config "addons-346248": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1002 11:39:47.860165 2500131 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-346248
	I1002 11:39:47.877851 2500131 main.go:141] libmachine: Using SSH client type: native
	I1002 11:39:47.878257 2500131 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ac3c0] 0x3aeb30 <nil>  [] 0s} 127.0.0.1 35872 <nil> <nil>}
	I1002 11:39:47.878281 2500131 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 11:39:48.220140 2500131 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 11:39:48.220164 2500131 machine.go:91] provisioned docker machine in 1.483510278s
	I1002 11:39:48.220175 2500131 client.go:171] LocalClient.Create took 10.041924509s
	I1002 11:39:48.220187 2500131 start.go:167] duration metric: libmachine.API.Create for "addons-346248" took 10.041986671s
	I1002 11:39:48.220195 2500131 start.go:300] post-start starting for "addons-346248" (driver="docker")
	I1002 11:39:48.220204 2500131 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 11:39:48.220272 2500131 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 11:39:48.220320 2500131 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-346248
	I1002 11:39:48.243695 2500131 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35872 SSHKeyPath:/home/jenkins/minikube-integration/17340-2494243/.minikube/machines/addons-346248/id_rsa Username:docker}
	I1002 11:39:48.343758 2500131 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 11:39:48.348058 2500131 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 11:39:48.348098 2500131 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1002 11:39:48.348111 2500131 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1002 11:39:48.348119 2500131 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1002 11:39:48.348132 2500131 filesync.go:126] Scanning /home/jenkins/minikube-integration/17340-2494243/.minikube/addons for local assets ...
	I1002 11:39:48.348208 2500131 filesync.go:126] Scanning /home/jenkins/minikube-integration/17340-2494243/.minikube/files for local assets ...
	I1002 11:39:48.348233 2500131 start.go:303] post-start completed in 128.032318ms
	I1002 11:39:48.348604 2500131 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-346248
	I1002 11:39:48.366509 2500131 profile.go:148] Saving config to /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/addons-346248/config.json ...
	I1002 11:39:48.366789 2500131 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 11:39:48.366839 2500131 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-346248
	I1002 11:39:48.384663 2500131 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35872 SSHKeyPath:/home/jenkins/minikube-integration/17340-2494243/.minikube/machines/addons-346248/id_rsa Username:docker}
	I1002 11:39:48.478898 2500131 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 11:39:48.484871 2500131 start.go:128] duration metric: createHost completed in 10.309738811s
	I1002 11:39:48.484941 2500131 start.go:83] releasing machines lock for "addons-346248", held for 10.309948796s
	I1002 11:39:48.485035 2500131 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-346248
	I1002 11:39:48.502706 2500131 ssh_runner.go:195] Run: cat /version.json
	I1002 11:39:48.502759 2500131 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-346248
	I1002 11:39:48.502772 2500131 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 11:39:48.502811 2500131 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-346248
	I1002 11:39:48.527722 2500131 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35872 SSHKeyPath:/home/jenkins/minikube-integration/17340-2494243/.minikube/machines/addons-346248/id_rsa Username:docker}
	I1002 11:39:48.532707 2500131 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35872 SSHKeyPath:/home/jenkins/minikube-integration/17340-2494243/.minikube/machines/addons-346248/id_rsa Username:docker}
	I1002 11:39:48.621006 2500131 ssh_runner.go:195] Run: systemctl --version
	I1002 11:39:48.760186 2500131 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 11:39:48.911578 2500131 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1002 11:39:48.917576 2500131 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 11:39:48.942699 2500131 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1002 11:39:48.942870 2500131 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 11:39:48.983986 2500131 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
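The disable step logged above renames any bridge/podman CNI configs with a `.mk_disabled` suffix so they no longer load and the recommended kindnet config takes precedence. An illustrative sketch of the same pattern against a scratch directory (the file names mirror the ones in the log; the directory is a stand-in for `/etc/cni/net.d`):

```shell
# Scratch directory standing in for /etc/cni/net.d
dir=$(mktemp -d)
touch "$dir/87-podman-bridge.conflist" "$dir/100-crio-bridge.conf" "$dir/10-kindnet.conflist"
# Rename bridge/podman configs that are not already disabled
find "$dir" -maxdepth 1 -type f \( \( -name '*bridge*' -or -name '*podman*' \) \
  -and -not -name '*.mk_disabled' \) -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
ls "$dir"
```

The kindnet config is untouched; only the bridge and podman files pick up the suffix.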
	I1002 11:39:48.984009 2500131 start.go:469] detecting cgroup driver to use...
	I1002 11:39:48.984070 2500131 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1002 11:39:48.984144 2500131 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 11:39:49.005458 2500131 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 11:39:49.020303 2500131 docker.go:197] disabling cri-docker service (if available) ...
	I1002 11:39:49.020443 2500131 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 11:39:49.037180 2500131 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 11:39:49.054870 2500131 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 11:39:49.165966 2500131 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 11:39:49.275583 2500131 docker.go:213] disabling docker service ...
	I1002 11:39:49.275663 2500131 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 11:39:49.297697 2500131 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 11:39:49.311876 2500131 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 11:39:49.415356 2500131 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 11:39:49.527546 2500131 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 11:39:49.541559 2500131 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 11:39:49.562466 2500131 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1002 11:39:49.562539 2500131 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 11:39:49.575258 2500131 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 11:39:49.575327 2500131 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 11:39:49.587389 2500131 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 11:39:49.599562 2500131 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
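The four `sed` invocations above rewrite `/etc/crio/crio.conf.d/02-crio.conf` in place: set the pause image, switch the cgroup manager to cgroupfs, then replace any existing `conmon_cgroup` with `"pod"`. A self-contained sketch against a scratch copy (the initial contents are illustrative, not the real file):

```shell
# Scratch copy of 02-crio.conf with plausible pre-existing values
f=$(mktemp)
printf '%s\n' \
  'pause_image = "registry.k8s.io/pause:3.6"' \
  'cgroup_manager = "systemd"' \
  'conmon_cgroup = "system.slice"' > "$f"
# Same edits as in the log, in the same order
sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' "$f"
sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$f"
sed -i '/conmon_cgroup = .*/d' "$f"
sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$f"
cat "$f"
```

Deleting `conmon_cgroup` before appending it after the `cgroup_manager` line keeps the file idempotent across repeated starts.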
	I1002 11:39:49.611476 2500131 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 11:39:49.624677 2500131 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 11:39:49.638436 2500131 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 11:39:49.650022 2500131 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 11:39:49.755748 2500131 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 11:39:49.883975 2500131 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 11:39:49.884090 2500131 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 11:39:49.889587 2500131 start.go:537] Will wait 60s for crictl version
	I1002 11:39:49.889659 2500131 ssh_runner.go:195] Run: which crictl
	I1002 11:39:49.894682 2500131 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1002 11:39:49.946720 2500131 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1002 11:39:49.946884 2500131 ssh_runner.go:195] Run: crio --version
	I1002 11:39:49.999280 2500131 ssh_runner.go:195] Run: crio --version
	I1002 11:39:50.057012 2500131 out.go:177] * Preparing Kubernetes v1.28.2 on CRI-O 1.24.6 ...
	I1002 11:39:50.059064 2500131 cli_runner.go:164] Run: docker network inspect addons-346248 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 11:39:50.077475 2500131 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1002 11:39:50.083231 2500131 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
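The one-liner above updates `/etc/hosts` atomically: filter out any stale `host.minikube.internal` entry, append the current gateway IP, and copy the result back over the file. A sketch of the same rewrite against a scratch file:

```shell
# Scratch file standing in for /etc/hosts, seeded with a stale entry
h=$(mktemp)
printf '127.0.0.1\tlocalhost\n192.168.49.1\thost.minikube.internal\n' > "$h"
# Drop the old line, append the fresh mapping, then swap the file in
{ grep -v $'\thost.minikube.internal$' "$h"; \
  printf '192.168.49.1\thost.minikube.internal\n'; } > "$h.new"
mv "$h.new" "$h"
cat "$h"
```

Writing to a temp file and copying it back (rather than editing in place) is what keeps a concurrent reader of `/etc/hosts` from ever seeing a half-written file; the later `control-plane.minikube.internal` entry is added the same way.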
	I1002 11:39:50.100215 2500131 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1002 11:39:50.100296 2500131 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 11:39:50.168761 2500131 crio.go:496] all images are preloaded for cri-o runtime.
	I1002 11:39:50.168788 2500131 crio.go:415] Images already preloaded, skipping extraction
	I1002 11:39:50.168868 2500131 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 11:39:50.210615 2500131 crio.go:496] all images are preloaded for cri-o runtime.
	I1002 11:39:50.210640 2500131 cache_images.go:84] Images are preloaded, skipping loading
	I1002 11:39:50.210719 2500131 ssh_runner.go:195] Run: crio config
	I1002 11:39:50.266900 2500131 cni.go:84] Creating CNI manager for ""
	I1002 11:39:50.266923 2500131 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 11:39:50.266954 2500131 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1002 11:39:50.266974 2500131 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.28.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-346248 NodeName:addons-346248 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 11:39:50.267121 2500131 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-346248"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 11:39:50.267194 2500131 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=addons-346248 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.2 ClusterName:addons-346248 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1002 11:39:50.267261 2500131 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.2
	I1002 11:39:50.277971 2500131 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 11:39:50.278045 2500131 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 11:39:50.288559 2500131 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (423 bytes)
	I1002 11:39:50.309985 2500131 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 11:39:50.332680 2500131 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2094 bytes)
	I1002 11:39:50.355501 2500131 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1002 11:39:50.360303 2500131 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 11:39:50.374440 2500131 certs.go:56] Setting up /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/addons-346248 for IP: 192.168.49.2
	I1002 11:39:50.374472 2500131 certs.go:190] acquiring lock for shared ca certs: {Name:mk2e28f0a4c3849593f708b97426b4e4332dc9e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 11:39:50.374623 2500131 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/17340-2494243/.minikube/ca.key
	I1002 11:39:50.635240 2500131 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17340-2494243/.minikube/ca.crt ...
	I1002 11:39:50.635270 2500131 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17340-2494243/.minikube/ca.crt: {Name:mk4bde90ada8b2c1f6eb38c8f23b996ba52c5ff2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 11:39:50.635838 2500131 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17340-2494243/.minikube/ca.key ...
	I1002 11:39:50.635855 2500131 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17340-2494243/.minikube/ca.key: {Name:mk762c8d3267c94b2c114fa86a68844c764d80f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 11:39:50.635954 2500131 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/17340-2494243/.minikube/proxy-client-ca.key
	I1002 11:39:50.857920 2500131 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17340-2494243/.minikube/proxy-client-ca.crt ...
	I1002 11:39:50.857952 2500131 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17340-2494243/.minikube/proxy-client-ca.crt: {Name:mk60b9b8d3a5e2cd99f0497dcaed293f128a1e59 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 11:39:50.858144 2500131 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17340-2494243/.minikube/proxy-client-ca.key ...
	I1002 11:39:50.858159 2500131 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17340-2494243/.minikube/proxy-client-ca.key: {Name:mk6e37d5ea41c0ea70d1c9f1d6b6078f224b7fef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 11:39:50.858282 2500131 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/addons-346248/client.key
	I1002 11:39:50.858299 2500131 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/addons-346248/client.crt with IP's: []
	I1002 11:39:51.058420 2500131 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/addons-346248/client.crt ...
	I1002 11:39:51.058458 2500131 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/addons-346248/client.crt: {Name:mk7215a3c6559e21432429edbf6201f84c45f0d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 11:39:51.058683 2500131 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/addons-346248/client.key ...
	I1002 11:39:51.058698 2500131 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/addons-346248/client.key: {Name:mk6e83d687c31723859ddcc78ec4212f44cca354 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 11:39:51.059225 2500131 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/addons-346248/apiserver.key.dd3b5fb2
	I1002 11:39:51.059258 2500131 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/addons-346248/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1002 11:39:51.765930 2500131 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/addons-346248/apiserver.crt.dd3b5fb2 ...
	I1002 11:39:51.765959 2500131 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/addons-346248/apiserver.crt.dd3b5fb2: {Name:mk54082ad6326caa96af78e869c5dff9288f62b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 11:39:51.766147 2500131 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/addons-346248/apiserver.key.dd3b5fb2 ...
	I1002 11:39:51.766162 2500131 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/addons-346248/apiserver.key.dd3b5fb2: {Name:mk07b165a03f6336f2c07e4fa418fb9d3a371f5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 11:39:51.766250 2500131 certs.go:337] copying /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/addons-346248/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/addons-346248/apiserver.crt
	I1002 11:39:51.766325 2500131 certs.go:341] copying /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/addons-346248/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/addons-346248/apiserver.key
	I1002 11:39:51.766381 2500131 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/addons-346248/proxy-client.key
	I1002 11:39:51.766408 2500131 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/addons-346248/proxy-client.crt with IP's: []
	I1002 11:39:52.582817 2500131 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/addons-346248/proxy-client.crt ...
	I1002 11:39:52.582852 2500131 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/addons-346248/proxy-client.crt: {Name:mka378761527cf700fbfc76cdcd10e1894ba8545 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 11:39:52.583507 2500131 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/addons-346248/proxy-client.key ...
	I1002 11:39:52.583527 2500131 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/addons-346248/proxy-client.key: {Name:mke803181031a985be2b28762bdd34acc717c9dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 11:39:52.583742 2500131 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-2494243/.minikube/certs/home/jenkins/minikube-integration/17340-2494243/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 11:39:52.583788 2500131 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-2494243/.minikube/certs/home/jenkins/minikube-integration/17340-2494243/.minikube/certs/ca.pem (1082 bytes)
	I1002 11:39:52.583821 2500131 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-2494243/.minikube/certs/home/jenkins/minikube-integration/17340-2494243/.minikube/certs/cert.pem (1123 bytes)
	I1002 11:39:52.583851 2500131 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-2494243/.minikube/certs/home/jenkins/minikube-integration/17340-2494243/.minikube/certs/key.pem (1675 bytes)
	I1002 11:39:52.584507 2500131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/addons-346248/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1002 11:39:52.615489 2500131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/addons-346248/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1002 11:39:52.645849 2500131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/addons-346248/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 11:39:52.676094 2500131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/addons-346248/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1002 11:39:52.707197 2500131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2494243/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 11:39:52.737527 2500131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2494243/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1002 11:39:52.767561 2500131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2494243/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 11:39:52.797126 2500131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2494243/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1002 11:39:52.827393 2500131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2494243/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 11:39:52.856839 2500131 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 11:39:52.878340 2500131 ssh_runner.go:195] Run: openssl version
	I1002 11:39:52.886062 2500131 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 11:39:52.898532 2500131 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 11:39:52.903213 2500131 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct  2 11:39 /usr/share/ca-certificates/minikubeCA.pem
	I1002 11:39:52.903303 2500131 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 11:39:52.912157 2500131 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
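The symlink name `b5213941.0` above is not arbitrary: OpenSSL looks up trusted CAs in `/etc/ssl/certs` by the certificate's subject-name hash, which is what the preceding `openssl x509 -hash -noout` call computed. A sketch with a throwaway self-signed cert (all paths here are scratch locations, not the real trust store):

```shell
# Generate a throwaway CA cert to hash
d=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=minikubeCA" \
  -keyout "$d/ca.key" -out "$d/ca.pem" 2>/dev/null
# Subject-name hash drives the <hash>.0 symlink name OpenSSL resolves
hash=$(openssl x509 -hash -noout -in "$d/ca.pem")
ln -fs "$d/ca.pem" "$d/$hash.0"
ls -l "$d/$hash.0"
```

The `.0` suffix disambiguates multiple certificates whose subjects hash to the same value (`.1`, `.2`, and so on).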
	I1002 11:39:52.924180 2500131 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1002 11:39:52.928814 2500131 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1002 11:39:52.928865 2500131 kubeadm.go:404] StartCluster: {Name:addons-346248 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:addons-346248 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 11:39:52.928965 2500131 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 11:39:52.929033 2500131 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 11:39:52.974243 2500131 cri.go:89] found id: ""
	I1002 11:39:52.974317 2500131 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 11:39:52.985073 2500131 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 11:39:52.996203 2500131 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1002 11:39:52.996270 2500131 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 11:39:53.009093 2500131 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 11:39:53.009135 2500131 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 11:39:53.110671 2500131 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1045-aws\n", err: exit status 1
	I1002 11:39:53.192067 2500131 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 11:40:09.897159 2500131 kubeadm.go:322] [init] Using Kubernetes version: v1.28.2
	I1002 11:40:09.897219 2500131 kubeadm.go:322] [preflight] Running pre-flight checks
	I1002 11:40:09.897304 2500131 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I1002 11:40:09.897358 2500131 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1045-aws
	I1002 11:40:09.897395 2500131 kubeadm.go:322] OS: Linux
	I1002 11:40:09.897443 2500131 kubeadm.go:322] CGROUPS_CPU: enabled
	I1002 11:40:09.897493 2500131 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I1002 11:40:09.897541 2500131 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I1002 11:40:09.897589 2500131 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I1002 11:40:09.897639 2500131 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I1002 11:40:09.897687 2500131 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I1002 11:40:09.897734 2500131 kubeadm.go:322] CGROUPS_PIDS: enabled
	I1002 11:40:09.897783 2500131 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I1002 11:40:09.897830 2500131 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I1002 11:40:09.897923 2500131 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 11:40:09.898067 2500131 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 11:40:09.898165 2500131 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1002 11:40:09.898239 2500131 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 11:40:09.901510 2500131 out.go:204]   - Generating certificates and keys ...
	I1002 11:40:09.901606 2500131 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1002 11:40:09.901674 2500131 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1002 11:40:09.901744 2500131 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1002 11:40:09.901801 2500131 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1002 11:40:09.901859 2500131 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1002 11:40:09.901910 2500131 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1002 11:40:09.901964 2500131 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1002 11:40:09.902072 2500131 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-346248 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1002 11:40:09.902121 2500131 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1002 11:40:09.902227 2500131 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-346248 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1002 11:40:09.902288 2500131 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1002 11:40:09.902350 2500131 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1002 11:40:09.902391 2500131 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1002 11:40:09.902443 2500131 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 11:40:09.902490 2500131 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 11:40:09.902541 2500131 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 11:40:09.902601 2500131 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 11:40:09.902653 2500131 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 11:40:09.902733 2500131 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 11:40:09.902795 2500131 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 11:40:09.904645 2500131 out.go:204]   - Booting up control plane ...
	I1002 11:40:09.904824 2500131 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 11:40:09.904930 2500131 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 11:40:09.905045 2500131 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 11:40:09.905248 2500131 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 11:40:09.905380 2500131 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 11:40:09.905449 2500131 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1002 11:40:09.905644 2500131 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1002 11:40:09.905725 2500131 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.003981 seconds
	I1002 11:40:09.905832 2500131 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1002 11:40:09.905965 2500131 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1002 11:40:09.906045 2500131 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1002 11:40:09.906228 2500131 kubeadm.go:322] [mark-control-plane] Marking the node addons-346248 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1002 11:40:09.906286 2500131 kubeadm.go:322] [bootstrap-token] Using token: xojxpk.e00p6250ox4qm8uw
	I1002 11:40:09.908477 2500131 out.go:204]   - Configuring RBAC rules ...
	I1002 11:40:09.908633 2500131 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1002 11:40:09.908742 2500131 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1002 11:40:09.908887 2500131 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1002 11:40:09.909018 2500131 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1002 11:40:09.909132 2500131 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1002 11:40:09.909236 2500131 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1002 11:40:09.909349 2500131 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1002 11:40:09.909397 2500131 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1002 11:40:09.909466 2500131 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1002 11:40:09.909478 2500131 kubeadm.go:322] 
	I1002 11:40:09.909542 2500131 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1002 11:40:09.909547 2500131 kubeadm.go:322] 
	I1002 11:40:09.909619 2500131 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1002 11:40:09.909623 2500131 kubeadm.go:322] 
	I1002 11:40:09.909647 2500131 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1002 11:40:09.909702 2500131 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1002 11:40:09.909749 2500131 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1002 11:40:09.909753 2500131 kubeadm.go:322] 
	I1002 11:40:09.909803 2500131 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1002 11:40:09.909807 2500131 kubeadm.go:322] 
	I1002 11:40:09.909851 2500131 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1002 11:40:09.909856 2500131 kubeadm.go:322] 
	I1002 11:40:09.909904 2500131 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1002 11:40:09.909977 2500131 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1002 11:40:09.910041 2500131 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1002 11:40:09.910046 2500131 kubeadm.go:322] 
	I1002 11:40:09.910124 2500131 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1002 11:40:09.910195 2500131 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1002 11:40:09.910199 2500131 kubeadm.go:322] 
	I1002 11:40:09.910278 2500131 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token xojxpk.e00p6250ox4qm8uw \
	I1002 11:40:09.910374 2500131 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:bafa40ad46197010727e96472103cc853e44f24d916d26f9ef93bdc8a951c012 \
	I1002 11:40:09.910393 2500131 kubeadm.go:322] 	--control-plane 
	I1002 11:40:09.910397 2500131 kubeadm.go:322] 
	I1002 11:40:09.910477 2500131 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1002 11:40:09.910481 2500131 kubeadm.go:322] 
	I1002 11:40:09.910557 2500131 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token xojxpk.e00p6250ox4qm8uw \
	I1002 11:40:09.910664 2500131 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:bafa40ad46197010727e96472103cc853e44f24d916d26f9ef93bdc8a951c012 
	I1002 11:40:09.910672 2500131 cni.go:84] Creating CNI manager for ""
	I1002 11:40:09.910679 2500131 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 11:40:09.912740 2500131 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1002 11:40:09.914675 2500131 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1002 11:40:09.929105 2500131 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.2/kubectl ...
	I1002 11:40:09.929123 2500131 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1002 11:40:09.977736 2500131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1002 11:40:10.893859 2500131 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1002 11:40:10.894013 2500131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:40:10.894126 2500131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=45957ed538272972541ab48cdf2c4b323d7f5c18 minikube.k8s.io/name=addons-346248 minikube.k8s.io/updated_at=2023_10_02T11_40_10_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:40:10.910412 2500131 ops.go:34] apiserver oom_adj: -16
	I1002 11:40:11.012888 2500131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:40:11.178042 2500131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:40:11.768550 2500131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:40:12.268089 2500131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:40:12.768009 2500131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:40:13.268009 2500131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:40:13.768449 2500131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:40:14.268490 2500131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:40:14.768009 2500131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:40:15.267960 2500131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:40:15.768014 2500131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:40:16.268969 2500131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:40:16.768849 2500131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:40:17.267919 2500131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:40:17.768916 2500131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:40:18.268842 2500131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:40:18.768021 2500131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:40:19.268861 2500131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:40:19.768503 2500131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:40:20.267961 2500131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:40:20.768379 2500131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:40:21.267983 2500131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:40:21.768189 2500131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:40:22.268460 2500131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:40:22.424837 2500131 kubeadm.go:1081] duration metric: took 11.53086855s to wait for elevateKubeSystemPrivileges.
	I1002 11:40:22.425182 2500131 kubeadm.go:406] StartCluster complete in 29.496316556s
	I1002 11:40:22.425232 2500131 settings.go:142] acquiring lock: {Name:mkcc97fc5770241202468070273c0755324bf4b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 11:40:22.425809 2500131 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17340-2494243/kubeconfig
	I1002 11:40:22.426391 2500131 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17340-2494243/kubeconfig: {Name:mkf500c5450045c9557e34c3a61a2f3f38c10ea4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 11:40:22.428822 2500131 config.go:182] Loaded profile config "addons-346248": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1002 11:40:22.428876 2500131 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1002 11:40:22.429235 2500131 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true]
	I1002 11:40:22.429343 2500131 addons.go:69] Setting volumesnapshots=true in profile "addons-346248"
	I1002 11:40:22.429363 2500131 addons.go:231] Setting addon volumesnapshots=true in "addons-346248"
	I1002 11:40:22.429445 2500131 host.go:66] Checking if "addons-346248" exists ...
	I1002 11:40:22.430264 2500131 cli_runner.go:164] Run: docker container inspect addons-346248 --format={{.State.Status}}
	I1002 11:40:22.430789 2500131 addons.go:69] Setting cloud-spanner=true in profile "addons-346248"
	I1002 11:40:22.430809 2500131 addons.go:231] Setting addon cloud-spanner=true in "addons-346248"
	I1002 11:40:22.430870 2500131 host.go:66] Checking if "addons-346248" exists ...
	I1002 11:40:22.431420 2500131 cli_runner.go:164] Run: docker container inspect addons-346248 --format={{.State.Status}}
	I1002 11:40:22.432873 2500131 addons.go:69] Setting inspektor-gadget=true in profile "addons-346248"
	I1002 11:40:22.432961 2500131 addons.go:231] Setting addon inspektor-gadget=true in "addons-346248"
	I1002 11:40:22.433053 2500131 host.go:66] Checking if "addons-346248" exists ...
	I1002 11:40:22.434192 2500131 cli_runner.go:164] Run: docker container inspect addons-346248 --format={{.State.Status}}
	I1002 11:40:22.434681 2500131 addons.go:69] Setting metrics-server=true in profile "addons-346248"
	I1002 11:40:22.434709 2500131 addons.go:231] Setting addon metrics-server=true in "addons-346248"
	I1002 11:40:22.434758 2500131 host.go:66] Checking if "addons-346248" exists ...
	I1002 11:40:22.435283 2500131 cli_runner.go:164] Run: docker container inspect addons-346248 --format={{.State.Status}}
	I1002 11:40:22.438337 2500131 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-346248"
	I1002 11:40:22.438416 2500131 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-346248"
	I1002 11:40:22.438460 2500131 host.go:66] Checking if "addons-346248" exists ...
	I1002 11:40:22.438906 2500131 cli_runner.go:164] Run: docker container inspect addons-346248 --format={{.State.Status}}
	I1002 11:40:22.439613 2500131 addons.go:69] Setting registry=true in profile "addons-346248"
	I1002 11:40:22.439692 2500131 addons.go:231] Setting addon registry=true in "addons-346248"
	I1002 11:40:22.439831 2500131 host.go:66] Checking if "addons-346248" exists ...
	I1002 11:40:22.440722 2500131 cli_runner.go:164] Run: docker container inspect addons-346248 --format={{.State.Status}}
	I1002 11:40:22.457443 2500131 addons.go:69] Setting default-storageclass=true in profile "addons-346248"
	I1002 11:40:22.457571 2500131 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-346248"
	I1002 11:40:22.458085 2500131 cli_runner.go:164] Run: docker container inspect addons-346248 --format={{.State.Status}}
	I1002 11:40:22.458519 2500131 addons.go:69] Setting storage-provisioner=true in profile "addons-346248"
	I1002 11:40:22.458545 2500131 addons.go:231] Setting addon storage-provisioner=true in "addons-346248"
	I1002 11:40:22.458601 2500131 host.go:66] Checking if "addons-346248" exists ...
	I1002 11:40:22.459275 2500131 cli_runner.go:164] Run: docker container inspect addons-346248 --format={{.State.Status}}
	I1002 11:40:22.476493 2500131 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-346248"
	I1002 11:40:22.476844 2500131 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-346248"
	I1002 11:40:22.478319 2500131 cli_runner.go:164] Run: docker container inspect addons-346248 --format={{.State.Status}}
	I1002 11:40:22.476632 2500131 addons.go:69] Setting gcp-auth=true in profile "addons-346248"
	I1002 11:40:22.492797 2500131 mustload.go:65] Loading cluster: addons-346248
	I1002 11:40:22.493052 2500131 config.go:182] Loaded profile config "addons-346248": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1002 11:40:22.493376 2500131 cli_runner.go:164] Run: docker container inspect addons-346248 --format={{.State.Status}}
	I1002 11:40:22.476643 2500131 addons.go:69] Setting ingress=true in profile "addons-346248"
	I1002 11:40:22.507640 2500131 addons.go:231] Setting addon ingress=true in "addons-346248"
	I1002 11:40:22.507733 2500131 host.go:66] Checking if "addons-346248" exists ...
	I1002 11:40:22.508381 2500131 cli_runner.go:164] Run: docker container inspect addons-346248 --format={{.State.Status}}
	I1002 11:40:22.476655 2500131 addons.go:69] Setting ingress-dns=true in profile "addons-346248"
	I1002 11:40:22.538902 2500131 addons.go:231] Setting addon ingress-dns=true in "addons-346248"
	I1002 11:40:22.539000 2500131 host.go:66] Checking if "addons-346248" exists ...
	I1002 11:40:22.539635 2500131 cli_runner.go:164] Run: docker container inspect addons-346248 --format={{.State.Status}}
	I1002 11:40:22.600291 2500131 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1002 11:40:22.632854 2500131 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1002 11:40:22.636612 2500131 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1002 11:40:22.641018 2500131 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1002 11:40:22.644065 2500131 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1002 11:40:22.646546 2500131 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1002 11:40:22.648796 2500131 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1002 11:40:22.652009 2500131 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1002 11:40:22.652104 2500131 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1002 11:40:22.652236 2500131 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-346248
	I1002 11:40:22.663328 2500131 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1002 11:40:22.665675 2500131 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1002 11:40:22.669013 2500131 addons.go:423] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1002 11:40:22.669104 2500131 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1002 11:40:22.669227 2500131 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-346248
	I1002 11:40:22.678104 2500131 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-346248" context rescaled to 1 replicas
	I1002 11:40:22.678213 2500131 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 11:40:22.685568 2500131 out.go:177] * Verifying Kubernetes components...
	I1002 11:40:22.693841 2500131 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 11:40:22.690752 2500131 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1002 11:40:22.732908 2500131 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.10
	I1002 11:40:22.735266 2500131 addons.go:423] installing /etc/kubernetes/addons/deployment.yaml
	I1002 11:40:22.735295 2500131 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1002 11:40:22.735373 2500131 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-346248
	I1002 11:40:22.809785 2500131 out.go:177]   - Using image docker.io/registry:2.8.1
	I1002 11:40:22.811724 2500131 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.21.0
	I1002 11:40:22.813908 2500131 addons.go:423] installing /etc/kubernetes/addons/ig-namespace.yaml
	I1002 11:40:22.813931 2500131 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I1002 11:40:22.814016 2500131 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-346248
	I1002 11:40:22.816399 2500131 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I1002 11:40:22.818468 2500131 addons.go:423] installing /etc/kubernetes/addons/registry-rc.yaml
	I1002 11:40:22.818491 2500131 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I1002 11:40:22.818586 2500131 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-346248
	I1002 11:40:22.821874 2500131 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I1002 11:40:22.824599 2500131 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1002 11:40:22.824622 2500131 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1002 11:40:22.824717 2500131 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-346248
	I1002 11:40:22.848044 2500131 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 11:40:22.851045 2500131 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 11:40:22.851104 2500131 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 11:40:22.851190 2500131 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-346248
	I1002 11:40:22.858976 2500131 addons.go:231] Setting addon default-storageclass=true in "addons-346248"
	I1002 11:40:22.859023 2500131 host.go:66] Checking if "addons-346248" exists ...
	I1002 11:40:22.859614 2500131 cli_runner.go:164] Run: docker container inspect addons-346248 --format={{.State.Status}}
	I1002 11:40:22.848973 2500131 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35872 SSHKeyPath:/home/jenkins/minikube-integration/17340-2494243/.minikube/machines/addons-346248/id_rsa Username:docker}
	I1002 11:40:22.896449 2500131 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35872 SSHKeyPath:/home/jenkins/minikube-integration/17340-2494243/.minikube/machines/addons-346248/id_rsa Username:docker}
	I1002 11:40:22.928594 2500131 addons.go:231] Setting addon storage-provisioner-rancher=true in "addons-346248"
	I1002 11:40:22.928647 2500131 host.go:66] Checking if "addons-346248" exists ...
	I1002 11:40:22.929234 2500131 cli_runner.go:164] Run: docker container inspect addons-346248 --format={{.State.Status}}
	I1002 11:40:22.947759 2500131 host.go:66] Checking if "addons-346248" exists ...
	I1002 11:40:22.996871 2500131 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35872 SSHKeyPath:/home/jenkins/minikube-integration/17340-2494243/.minikube/machines/addons-346248/id_rsa Username:docker}
	I1002 11:40:23.003744 2500131 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.9.0
	I1002 11:40:23.007362 2500131 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I1002 11:40:23.011746 2500131 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I1002 11:40:23.014417 2500131 addons.go:423] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1002 11:40:23.014454 2500131 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16083 bytes)
	I1002 11:40:23.014573 2500131 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-346248
	I1002 11:40:23.063252 2500131 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I1002 11:40:23.067845 2500131 addons.go:423] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1002 11:40:23.067883 2500131 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1002 11:40:23.067986 2500131 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-346248
	I1002 11:40:23.171854 2500131 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35872 SSHKeyPath:/home/jenkins/minikube-integration/17340-2494243/.minikube/machines/addons-346248/id_rsa Username:docker}
	I1002 11:40:23.185640 2500131 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35872 SSHKeyPath:/home/jenkins/minikube-integration/17340-2494243/.minikube/machines/addons-346248/id_rsa Username:docker}
	I1002 11:40:23.195827 2500131 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35872 SSHKeyPath:/home/jenkins/minikube-integration/17340-2494243/.minikube/machines/addons-346248/id_rsa Username:docker}
	I1002 11:40:23.228979 2500131 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 11:40:23.229042 2500131 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 11:40:23.229122 2500131 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-346248
	I1002 11:40:23.241084 2500131 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35872 SSHKeyPath:/home/jenkins/minikube-integration/17340-2494243/.minikube/machines/addons-346248/id_rsa Username:docker}
	I1002 11:40:23.284348 2500131 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35872 SSHKeyPath:/home/jenkins/minikube-integration/17340-2494243/.minikube/machines/addons-346248/id_rsa Username:docker}
	I1002 11:40:23.298058 2500131 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1002 11:40:23.301528 2500131 out.go:177]   - Using image docker.io/busybox:stable
	I1002 11:40:23.305444 2500131 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1002 11:40:23.305470 2500131 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1002 11:40:23.305754 2500131 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-346248
	I1002 11:40:23.304788 2500131 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35872 SSHKeyPath:/home/jenkins/minikube-integration/17340-2494243/.minikube/machines/addons-346248/id_rsa Username:docker}
	I1002 11:40:23.322206 2500131 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35872 SSHKeyPath:/home/jenkins/minikube-integration/17340-2494243/.minikube/machines/addons-346248/id_rsa Username:docker}
	I1002 11:40:23.364310 2500131 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1002 11:40:23.364340 2500131 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1002 11:40:23.370016 2500131 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35872 SSHKeyPath:/home/jenkins/minikube-integration/17340-2494243/.minikube/machines/addons-346248/id_rsa Username:docker}
	I1002 11:40:23.504408 2500131 addons.go:423] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1002 11:40:23.504470 2500131 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1002 11:40:23.537467 2500131 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1002 11:40:23.589169 2500131 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1002 11:40:23.589241 2500131 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1002 11:40:23.654058 2500131 addons.go:423] installing /etc/kubernetes/addons/registry-svc.yaml
	I1002 11:40:23.654090 2500131 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1002 11:40:23.705742 2500131 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 11:40:23.712230 2500131 addons.go:423] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1002 11:40:23.712295 2500131 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1002 11:40:23.719986 2500131 addons.go:423] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I1002 11:40:23.720051 2500131 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I1002 11:40:23.752267 2500131 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1002 11:40:23.779511 2500131 addons.go:423] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1002 11:40:23.779586 2500131 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1002 11:40:23.779958 2500131 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1002 11:40:23.795819 2500131 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1002 11:40:23.795895 2500131 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1002 11:40:23.813787 2500131 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 11:40:23.833462 2500131 addons.go:423] installing /etc/kubernetes/addons/ig-role.yaml
	I1002 11:40:23.833534 2500131 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I1002 11:40:23.889550 2500131 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1002 11:40:23.889637 2500131 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1002 11:40:23.946976 2500131 addons.go:423] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1002 11:40:23.947051 2500131 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1002 11:40:23.959518 2500131 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1002 11:40:23.966758 2500131 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1002 11:40:24.021486 2500131 addons.go:423] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1002 11:40:24.021570 2500131 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1002 11:40:24.029329 2500131 addons.go:423] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I1002 11:40:24.029408 2500131 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I1002 11:40:24.068401 2500131 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1002 11:40:24.068478 2500131 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1002 11:40:24.156791 2500131 addons.go:423] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1002 11:40:24.156863 2500131 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1002 11:40:24.224384 2500131 addons.go:423] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1002 11:40:24.224451 2500131 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1002 11:40:24.271261 2500131 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I1002 11:40:24.271336 2500131 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I1002 11:40:24.325478 2500131 addons.go:423] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1002 11:40:24.325554 2500131 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1002 11:40:24.328880 2500131 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1002 11:40:24.328946 2500131 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1002 11:40:24.418002 2500131 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I1002 11:40:24.418079 2500131 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I1002 11:40:24.466072 2500131 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1002 11:40:24.491127 2500131 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1002 11:40:24.491205 2500131 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1002 11:40:24.516174 2500131 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1002 11:40:24.541675 2500131 addons.go:423] installing /etc/kubernetes/addons/ig-crd.yaml
	I1002 11:40:24.541795 2500131 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I1002 11:40:24.746101 2500131 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1002 11:40:24.746178 2500131 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1002 11:40:24.790465 2500131 addons.go:423] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I1002 11:40:24.790543 2500131 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7741 bytes)
	I1002 11:40:24.859473 2500131 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1002 11:40:24.859548 2500131 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1002 11:40:24.931959 2500131 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I1002 11:40:24.995928 2500131 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1002 11:40:24.995959 2500131 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1002 11:40:25.102138 2500131 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.408259357s)
	I1002 11:40:25.103252 2500131 node_ready.go:35] waiting up to 6m0s for node "addons-346248" to be "Ready" ...
	I1002 11:40:25.103537 2500131 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.409508072s)
	I1002 11:40:25.103563 2500131 start.go:923] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1002 11:40:25.224379 2500131 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1002 11:40:25.224411 2500131 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1002 11:40:25.365090 2500131 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1002 11:40:26.868753 2500131 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.331241488s)
	I1002 11:40:27.555071 2500131 node_ready.go:58] node "addons-346248" has status "Ready":"False"
	I1002 11:40:27.899712 2500131 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.193895135s)
	I1002 11:40:27.899821 2500131 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.147494396s)
	I1002 11:40:28.771253 2500131 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.991238411s)
	I1002 11:40:28.771554 2500131 addons.go:467] Verifying addon ingress=true in "addons-346248"
	I1002 11:40:28.771600 2500131 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.255342291s)
	I1002 11:40:28.771630 2500131 addons.go:467] Verifying addon metrics-server=true in "addons-346248"
	I1002 11:40:28.773848 2500131 out.go:177] * Verifying ingress addon...
	I1002 11:40:28.771705 2500131 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (3.839699144s)
	I1002 11:40:28.771376 2500131 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.811789126s)
	I1002 11:40:28.771436 2500131 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.804602262s)
	I1002 11:40:28.771531 2500131 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.305369676s)
	I1002 11:40:28.771347 2500131 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.957484069s)
	I1002 11:40:28.776218 2500131 addons.go:467] Verifying addon registry=true in "addons-346248"
	I1002 11:40:28.778242 2500131 out.go:177] * Verifying registry addon...
	W1002 11:40:28.776761 2500131 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1002 11:40:28.777556 2500131 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1002 11:40:28.780898 2500131 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1002 11:40:28.781052 2500131 retry.go:31] will retry after 317.234468ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1002 11:40:28.792123 2500131 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1002 11:40:28.792150 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:40:28.797330 2500131 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1002 11:40:28.797407 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1002 11:40:28.800485 2500131 out.go:239] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I1002 11:40:28.801371 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:40:28.806229 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 11:40:29.098818 2500131 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1002 11:40:29.108842 2500131 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.743692739s)
	I1002 11:40:29.108880 2500131 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-346248"
	I1002 11:40:29.111278 2500131 out.go:177] * Verifying csi-hostpath-driver addon...
	I1002 11:40:29.114090 2500131 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1002 11:40:29.132042 2500131 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1002 11:40:29.132069 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:40:29.161528 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:40:29.306262 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:40:29.310537 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 11:40:29.690226 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:40:29.758145 2500131 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1002 11:40:29.758299 2500131 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-346248
	I1002 11:40:29.778213 2500131 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35872 SSHKeyPath:/home/jenkins/minikube-integration/17340-2494243/.minikube/machines/addons-346248/id_rsa Username:docker}
	I1002 11:40:29.805691 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:40:29.810304 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 11:40:29.893473 2500131 node_ready.go:58] node "addons-346248" has status "Ready":"False"
	I1002 11:40:30.012347 2500131 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1002 11:40:30.127377 2500131 addons.go:231] Setting addon gcp-auth=true in "addons-346248"
	I1002 11:40:30.127489 2500131 host.go:66] Checking if "addons-346248" exists ...
	I1002 11:40:30.128109 2500131 cli_runner.go:164] Run: docker container inspect addons-346248 --format={{.State.Status}}
	I1002 11:40:30.164914 2500131 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1002 11:40:30.164994 2500131 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-346248
	I1002 11:40:30.172295 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:40:30.207282 2500131 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35872 SSHKeyPath:/home/jenkins/minikube-integration/17340-2494243/.minikube/machines/addons-346248/id_rsa Username:docker}
	I1002 11:40:30.316312 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:40:30.330706 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 11:40:30.577334 2500131 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.478466832s)
	I1002 11:40:30.580095 2500131 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I1002 11:40:30.582441 2500131 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I1002 11:40:30.584450 2500131 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1002 11:40:30.584476 2500131 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1002 11:40:30.654564 2500131 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1002 11:40:30.654643 2500131 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1002 11:40:30.669055 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:40:30.709739 2500131 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1002 11:40:30.709844 2500131 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5412 bytes)
	I1002 11:40:30.791728 2500131 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1002 11:40:30.807394 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:40:30.812286 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 11:40:31.169289 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:40:31.306194 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:40:31.311020 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 11:40:31.667199 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:40:31.821795 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 11:40:31.823106 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:40:31.906113 2500131 node_ready.go:58] node "addons-346248" has status "Ready":"False"
	I1002 11:40:32.149224 2500131 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.357456413s)
	I1002 11:40:32.150947 2500131 addons.go:467] Verifying addon gcp-auth=true in "addons-346248"
	I1002 11:40:32.154379 2500131 out.go:177] * Verifying gcp-auth addon...
	I1002 11:40:32.157152 2500131 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1002 11:40:32.226402 2500131 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1002 11:40:32.226427 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:40:32.227396 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:40:32.233043 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:40:32.308379 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:40:32.314562 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 11:40:32.667598 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:40:32.738173 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:40:32.808465 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:40:32.831104 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 11:40:33.167597 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:40:33.239274 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:40:33.306307 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:40:33.311074 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 11:40:33.666998 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:40:33.737623 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:40:33.806639 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:40:33.841249 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 11:40:33.912162 2500131 node_ready.go:58] node "addons-346248" has status "Ready":"False"
	I1002 11:40:34.167204 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:40:34.236936 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:40:34.306372 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:40:34.310902 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 11:40:34.667609 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:40:34.737995 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:40:34.806643 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:40:34.810962 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 11:40:35.167914 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:40:35.237253 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:40:35.307569 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:40:35.312038 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 11:40:35.670127 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:40:35.737640 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:40:35.812057 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 11:40:35.813675 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:40:36.167189 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:40:36.243134 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:40:36.317424 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:40:36.321931 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 11:40:36.393405 2500131 node_ready.go:58] node "addons-346248" has status "Ready":"False"
	I1002 11:40:36.666695 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:40:36.737478 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:40:36.805922 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:40:36.810962 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 11:40:37.166219 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:40:37.237220 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:40:37.306709 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:40:37.310862 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 11:40:37.666628 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:40:37.737173 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:40:37.806658 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:40:37.810775 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 11:40:38.166825 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:40:38.237610 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:40:38.306485 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:40:38.310614 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 11:40:38.393869 2500131 node_ready.go:58] node "addons-346248" has status "Ready":"False"
	I1002 11:40:38.666500 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:40:38.737017 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:40:38.805842 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:40:38.810831 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 11:40:39.166060 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:40:39.236791 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:40:39.305880 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:40:39.310809 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 11:40:39.665984 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:40:39.736956 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:40:39.805904 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:40:39.811062 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 11:40:40.165917 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:40:40.237522 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:40:40.306690 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:40:40.310715 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 11:40:40.666683 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:40:40.736897 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:40:40.805593 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:40:40.810825 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 11:40:40.893981 2500131 node_ready.go:58] node "addons-346248" has status "Ready":"False"
	I1002 11:40:41.166667 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:40:41.236873 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:40:41.306234 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:40:41.310232 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 11:40:41.666954 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:40:41.736826 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:40:41.806516 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:40:41.814277 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 11:40:42.167100 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:40:42.237122 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:40:42.306449 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:40:42.310601 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 11:40:42.666650 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:40:42.737463 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:40:42.806267 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:40:42.810160 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 11:40:43.166855 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:40:43.238802 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:40:43.306214 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:40:43.311426 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 11:40:43.393629 2500131 node_ready.go:58] node "addons-346248" has status "Ready":"False"
	I1002 11:40:43.666405 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:40:43.736895 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:40:43.806080 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:40:43.811318 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 11:40:44.165834 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:40:44.237657 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:40:44.305979 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:40:44.310842 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 11:40:44.666805 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:40:44.737101 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:40:44.806838 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:40:44.811146 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 11:40:45.169251 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:40:45.239911 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:40:45.308621 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:40:45.315298 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 11:40:45.666573 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:40:45.736288 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:40:45.805848 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:40:45.810915 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 11:40:45.893083 2500131 node_ready.go:58] node "addons-346248" has status "Ready":"False"
	I1002 11:40:46.166426 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:40:46.237233 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:40:46.306463 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:40:46.310730 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 11:40:46.666147 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:40:46.737115 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:40:46.806957 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:40:46.810821 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 11:40:47.166015 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:40:47.237291 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:40:47.306117 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:40:47.311408 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 11:40:47.666291 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:40:47.737170 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:40:47.806543 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:40:47.810220 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 11:40:47.893764 2500131 node_ready.go:58] node "addons-346248" has status "Ready":"False"
	I1002 11:40:48.166908 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:40:48.236984 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:40:48.306284 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:40:48.311191 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 11:40:48.666332 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:40:48.737290 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:40:48.806174 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:40:48.810366 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 11:40:49.166761 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:40:49.236873 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:40:49.305759 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:40:49.310932 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 11:40:49.666271 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:40:49.737386 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:40:49.805586 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:40:49.810475 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 11:40:49.893878 2500131 node_ready.go:58] node "addons-346248" has status "Ready":"False"
	I1002 11:40:50.170450 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:40:50.237613 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:40:50.306497 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:40:50.310167 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 11:40:50.666315 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:40:50.737668 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:40:50.805855 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:40:50.810990 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 11:40:51.169509 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:40:51.237038 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:40:51.306166 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:40:51.310932 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 11:40:51.667043 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:40:51.736874 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:40:51.805854 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:40:51.810758 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 11:40:51.893938 2500131 node_ready.go:58] node "addons-346248" has status "Ready":"False"
	I1002 11:40:52.166208 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:40:52.237276 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:40:52.305357 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:40:52.310287 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 11:40:52.666385 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:40:52.737377 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:40:52.805594 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:40:52.810557 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 11:40:53.166808 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:40:53.238284 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:40:53.305441 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:40:53.311743 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 11:40:53.666121 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:40:53.737181 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:40:53.806531 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:40:53.810720 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 11:40:54.166254 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:40:54.236779 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:40:54.306227 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:40:54.310052 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 11:40:54.393370 2500131 node_ready.go:58] node "addons-346248" has status "Ready":"False"
	I1002 11:40:54.665975 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:40:54.736515 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:40:54.806022 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:40:54.810011 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 11:40:55.167102 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:40:55.237735 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:40:55.306574 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:40:55.310551 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 11:40:55.666080 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:40:55.736825 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:40:55.806213 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:40:55.810150 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 11:40:56.173504 2500131 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1002 11:40:56.173579 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:40:56.239375 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:40:56.321632 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:40:56.326170 2500131 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1002 11:40:56.326253 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 11:40:56.475977 2500131 node_ready.go:49] node "addons-346248" has status "Ready":"True"
	I1002 11:40:56.476042 2500131 node_ready.go:38] duration metric: took 31.372752022s waiting for node "addons-346248" to be "Ready" ...
	I1002 11:40:56.476070 2500131 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 11:40:56.494529 2500131 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-ts974" in "kube-system" namespace to be "Ready" ...
	I1002 11:40:56.677477 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:40:56.737801 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:40:56.822438 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:40:56.824013 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 11:40:57.167653 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:40:57.241435 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:40:57.307460 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:40:57.312948 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 11:40:57.670216 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:40:57.736908 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:40:57.812368 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 11:40:57.814760 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:40:58.167846 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:40:58.237732 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:40:58.317895 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:40:58.319798 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 11:40:58.525968 2500131 pod_ready.go:92] pod "coredns-5dd5756b68-ts974" in "kube-system" namespace has status "Ready":"True"
	I1002 11:40:58.525994 2500131 pod_ready.go:81] duration metric: took 2.031381615s waiting for pod "coredns-5dd5756b68-ts974" in "kube-system" namespace to be "Ready" ...
	I1002 11:40:58.526018 2500131 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-346248" in "kube-system" namespace to be "Ready" ...
	I1002 11:40:58.532867 2500131 pod_ready.go:92] pod "etcd-addons-346248" in "kube-system" namespace has status "Ready":"True"
	I1002 11:40:58.532894 2500131 pod_ready.go:81] duration metric: took 6.868529ms waiting for pod "etcd-addons-346248" in "kube-system" namespace to be "Ready" ...
	I1002 11:40:58.532909 2500131 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-346248" in "kube-system" namespace to be "Ready" ...
	I1002 11:40:58.540311 2500131 pod_ready.go:92] pod "kube-apiserver-addons-346248" in "kube-system" namespace has status "Ready":"True"
	I1002 11:40:58.540344 2500131 pod_ready.go:81] duration metric: took 7.419232ms waiting for pod "kube-apiserver-addons-346248" in "kube-system" namespace to be "Ready" ...
	I1002 11:40:58.540358 2500131 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-346248" in "kube-system" namespace to be "Ready" ...
	I1002 11:40:58.546716 2500131 pod_ready.go:92] pod "kube-controller-manager-addons-346248" in "kube-system" namespace has status "Ready":"True"
	I1002 11:40:58.546738 2500131 pod_ready.go:81] duration metric: took 6.372216ms waiting for pod "kube-controller-manager-addons-346248" in "kube-system" namespace to be "Ready" ...
	I1002 11:40:58.546754 2500131 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tgtnc" in "kube-system" namespace to be "Ready" ...
	I1002 11:40:58.669666 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:40:58.737377 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:40:58.795123 2500131 pod_ready.go:92] pod "kube-proxy-tgtnc" in "kube-system" namespace has status "Ready":"True"
	I1002 11:40:58.795148 2500131 pod_ready.go:81] duration metric: took 248.386484ms waiting for pod "kube-proxy-tgtnc" in "kube-system" namespace to be "Ready" ...
	I1002 11:40:58.795161 2500131 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-346248" in "kube-system" namespace to be "Ready" ...
	I1002 11:40:58.806929 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:40:58.812423 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 11:40:59.167506 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:40:59.194004 2500131 pod_ready.go:92] pod "kube-scheduler-addons-346248" in "kube-system" namespace has status "Ready":"True"
	I1002 11:40:59.194037 2500131 pod_ready.go:81] duration metric: took 398.838585ms waiting for pod "kube-scheduler-addons-346248" in "kube-system" namespace to be "Ready" ...
	I1002 11:40:59.194050 2500131 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-7c66d45ddc-grq99" in "kube-system" namespace to be "Ready" ...
	I1002 11:40:59.236785 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:40:59.308894 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:40:59.314352 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 11:40:59.669879 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:40:59.737774 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:40:59.807119 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:40:59.817984 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 11:41:00.172649 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:41:00.238385 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:41:00.306989 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:41:00.313797 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 11:41:00.668779 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:41:00.737087 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:41:00.806024 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:41:00.811329 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 11:41:01.168579 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:41:01.242361 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:41:01.306205 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:41:01.311061 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 11:41:01.501745 2500131 pod_ready.go:102] pod "metrics-server-7c66d45ddc-grq99" in "kube-system" namespace has status "Ready":"False"
	I1002 11:41:01.667733 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:41:01.737551 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:41:01.805968 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:41:01.811746 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 11:41:02.168046 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:41:02.242890 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:41:02.307492 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:41:02.313811 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 11:41:02.667208 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:41:02.736926 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:41:02.806363 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:41:02.811429 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 11:41:03.167209 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:41:03.237337 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:41:03.306315 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:41:03.311569 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 11:41:03.502957 2500131 pod_ready.go:102] pod "metrics-server-7c66d45ddc-grq99" in "kube-system" namespace has status "Ready":"False"
	I1002 11:41:03.667736 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:41:03.737543 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:41:03.806059 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:41:03.812856 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 11:41:04.167698 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:41:04.238902 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:41:04.307458 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:41:04.312823 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 11:41:04.669989 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:41:04.738105 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:41:04.807402 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:41:04.813465 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 11:41:05.173259 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:41:05.241192 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:41:05.309387 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:41:05.313566 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 11:41:05.510137 2500131 pod_ready.go:102] pod "metrics-server-7c66d45ddc-grq99" in "kube-system" namespace has status "Ready":"False"
	I1002 11:41:05.670352 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:41:05.739083 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:41:05.809612 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:41:05.838479 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 11:41:06.185141 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:41:06.241405 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:41:06.307512 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:41:06.318051 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 11:41:06.671482 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:41:06.740120 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:41:06.815554 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:41:06.820626 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 11:41:07.170560 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:41:07.246496 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:41:07.310803 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:41:07.319507 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 11:41:07.669603 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:41:07.738009 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:41:07.807507 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:41:07.817778 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 11:41:08.004224 2500131 pod_ready.go:102] pod "metrics-server-7c66d45ddc-grq99" in "kube-system" namespace has status "Ready":"False"
	I1002 11:41:08.169315 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:41:08.236863 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:41:08.307028 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:41:08.312282 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 11:41:08.669326 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:41:08.736889 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:41:08.806785 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:41:08.812434 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 11:41:09.167843 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:41:09.237676 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:41:09.307039 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:41:09.312133 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 11:41:09.667604 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:41:09.737354 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:41:09.805861 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:41:09.811323 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 11:41:10.007088 2500131 pod_ready.go:102] pod "metrics-server-7c66d45ddc-grq99" in "kube-system" namespace has status "Ready":"False"
	I1002 11:41:10.168279 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:41:10.237241 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:41:10.322199 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:41:10.322841 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 11:41:10.670864 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:41:10.736668 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:41:10.812448 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:41:10.817007 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 11:41:11.168931 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:41:11.237552 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:41:11.314253 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:41:11.327865 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 11:41:11.667750 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:41:11.737915 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:41:11.809614 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:41:11.814439 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 11:41:12.167792 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:41:12.237569 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:41:12.320896 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:41:12.326595 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 11:41:12.505413 2500131 pod_ready.go:102] pod "metrics-server-7c66d45ddc-grq99" in "kube-system" namespace has status "Ready":"False"
	I1002 11:41:12.668187 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:41:12.736853 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:41:12.806379 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:41:12.810782 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 11:41:13.168176 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:41:13.245402 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:41:13.307208 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:41:13.327885 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 11:41:13.671563 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:41:13.737409 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:41:13.806701 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:41:13.811188 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 11:41:14.170833 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:41:14.250217 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:41:14.307782 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:41:14.317950 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 11:41:14.513182 2500131 pod_ready.go:102] pod "metrics-server-7c66d45ddc-grq99" in "kube-system" namespace has status "Ready":"False"
	I1002 11:41:14.668805 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:41:14.737643 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:41:14.806870 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:41:14.819275 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 11:41:15.172843 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:41:15.237876 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:41:15.307792 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:41:15.313003 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 11:41:15.668403 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:41:15.737674 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:41:15.814871 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:41:15.821602 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 11:41:16.169025 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:41:16.241039 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:41:16.307165 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:41:16.312364 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 11:41:16.681458 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:41:16.738513 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:41:16.806465 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:41:16.811534 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 11:41:17.004352 2500131 pod_ready.go:92] pod "metrics-server-7c66d45ddc-grq99" in "kube-system" namespace has status "Ready":"True"
	I1002 11:41:17.004387 2500131 pod_ready.go:81] duration metric: took 17.810329108s waiting for pod "metrics-server-7c66d45ddc-grq99" in "kube-system" namespace to be "Ready" ...
	I1002 11:41:17.004414 2500131 pod_ready.go:38] duration metric: took 20.528317438s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 11:41:17.004441 2500131 api_server.go:52] waiting for apiserver process to appear ...
	I1002 11:41:17.004513 2500131 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:41:17.060053 2500131 api_server.go:72] duration metric: took 54.381744501s to wait for apiserver process to appear ...
	I1002 11:41:17.060082 2500131 api_server.go:88] waiting for apiserver healthz status ...
	I1002 11:41:17.060101 2500131 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1002 11:41:17.076908 2500131 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1002 11:41:17.078693 2500131 api_server.go:141] control plane version: v1.28.2
	I1002 11:41:17.078732 2500131 api_server.go:131] duration metric: took 18.642006ms to wait for apiserver health ...
	I1002 11:41:17.078741 2500131 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 11:41:17.096317 2500131 system_pods.go:59] 17 kube-system pods found
	I1002 11:41:17.096364 2500131 system_pods.go:61] "coredns-5dd5756b68-ts974" [f51a95e4-d6ef-429c-8e5e-3832162005b4] Running
	I1002 11:41:17.096376 2500131 system_pods.go:61] "csi-hostpath-attacher-0" [f7c424f9-4978-48a8-be38-443c07bf6f6d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1002 11:41:17.096385 2500131 system_pods.go:61] "csi-hostpath-resizer-0" [bd4a04f7-2923-4168-afab-2d13c4686c5f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1002 11:41:17.096405 2500131 system_pods.go:61] "csi-hostpathplugin-xmjv9" [ce8f6a02-df5f-4980-baf6-46bdcb297973] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1002 11:41:17.096419 2500131 system_pods.go:61] "etcd-addons-346248" [c050a4cf-dd59-45df-a8f5-534d57f3ffee] Running
	I1002 11:41:17.096424 2500131 system_pods.go:61] "kindnet-pwj89" [9eea767b-7975-404a-8151-0bd2be7f0128] Running
	I1002 11:41:17.096433 2500131 system_pods.go:61] "kube-apiserver-addons-346248" [09685920-926e-4b70-a83c-8345460d6c5f] Running
	I1002 11:41:17.096441 2500131 system_pods.go:61] "kube-controller-manager-addons-346248" [e5a364bd-a172-4579-8916-8ec90c25a3f0] Running
	I1002 11:41:17.096449 2500131 system_pods.go:61] "kube-ingress-dns-minikube" [61ac6b36-7371-4a90-a337-92f82d4ed43b] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1002 11:41:17.096457 2500131 system_pods.go:61] "kube-proxy-tgtnc" [e2f30c49-eadb-4e7d-b06c-59ad97d644bb] Running
	I1002 11:41:17.096462 2500131 system_pods.go:61] "kube-scheduler-addons-346248" [a9cce9be-24a1-4950-8ccb-9981b65a4768] Running
	I1002 11:41:17.096468 2500131 system_pods.go:61] "metrics-server-7c66d45ddc-grq99" [9fe2087a-0c7f-4aa1-a866-60cbff2676c3] Running
	I1002 11:41:17.096477 2500131 system_pods.go:61] "registry-proxy-wxxk2" [51f22832-d869-487a-baa6-1753d0735683] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1002 11:41:17.096484 2500131 system_pods.go:61] "registry-q9gtk" [f5a09aa6-1c6f-488e-88ee-7656c207927e] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1002 11:41:17.096498 2500131 system_pods.go:61] "snapshot-controller-58dbcc7b99-fgg55" [caef0715-3c94-43c6-b3af-111657b0ae4d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 11:41:17.096510 2500131 system_pods.go:61] "snapshot-controller-58dbcc7b99-hbhfd" [56955cb6-fc75-4d05-a712-24e43a0fe970] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 11:41:17.096547 2500131 system_pods.go:61] "storage-provisioner" [bf5bf2df-6736-4e3f-b735-67601e67a2cf] Running
	I1002 11:41:17.096555 2500131 system_pods.go:74] duration metric: took 17.804278ms to wait for pod list to return data ...
	I1002 11:41:17.096564 2500131 default_sa.go:34] waiting for default service account to be created ...
	I1002 11:41:17.099249 2500131 default_sa.go:45] found service account: "default"
	I1002 11:41:17.099280 2500131 default_sa.go:55] duration metric: took 2.70964ms for default service account to be created ...
	I1002 11:41:17.099291 2500131 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 11:41:17.119949 2500131 system_pods.go:86] 17 kube-system pods found
	I1002 11:41:17.119982 2500131 system_pods.go:89] "coredns-5dd5756b68-ts974" [f51a95e4-d6ef-429c-8e5e-3832162005b4] Running
	I1002 11:41:17.119993 2500131 system_pods.go:89] "csi-hostpath-attacher-0" [f7c424f9-4978-48a8-be38-443c07bf6f6d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1002 11:41:17.120002 2500131 system_pods.go:89] "csi-hostpath-resizer-0" [bd4a04f7-2923-4168-afab-2d13c4686c5f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1002 11:41:17.120012 2500131 system_pods.go:89] "csi-hostpathplugin-xmjv9" [ce8f6a02-df5f-4980-baf6-46bdcb297973] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1002 11:41:17.120019 2500131 system_pods.go:89] "etcd-addons-346248" [c050a4cf-dd59-45df-a8f5-534d57f3ffee] Running
	I1002 11:41:17.120025 2500131 system_pods.go:89] "kindnet-pwj89" [9eea767b-7975-404a-8151-0bd2be7f0128] Running
	I1002 11:41:17.120036 2500131 system_pods.go:89] "kube-apiserver-addons-346248" [09685920-926e-4b70-a83c-8345460d6c5f] Running
	I1002 11:41:17.120042 2500131 system_pods.go:89] "kube-controller-manager-addons-346248" [e5a364bd-a172-4579-8916-8ec90c25a3f0] Running
	I1002 11:41:17.120056 2500131 system_pods.go:89] "kube-ingress-dns-minikube" [61ac6b36-7371-4a90-a337-92f82d4ed43b] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1002 11:41:17.120061 2500131 system_pods.go:89] "kube-proxy-tgtnc" [e2f30c49-eadb-4e7d-b06c-59ad97d644bb] Running
	I1002 11:41:17.120067 2500131 system_pods.go:89] "kube-scheduler-addons-346248" [a9cce9be-24a1-4950-8ccb-9981b65a4768] Running
	I1002 11:41:17.120079 2500131 system_pods.go:89] "metrics-server-7c66d45ddc-grq99" [9fe2087a-0c7f-4aa1-a866-60cbff2676c3] Running
	I1002 11:41:17.120086 2500131 system_pods.go:89] "registry-proxy-wxxk2" [51f22832-d869-487a-baa6-1753d0735683] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1002 11:41:17.120095 2500131 system_pods.go:89] "registry-q9gtk" [f5a09aa6-1c6f-488e-88ee-7656c207927e] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1002 11:41:17.120108 2500131 system_pods.go:89] "snapshot-controller-58dbcc7b99-fgg55" [caef0715-3c94-43c6-b3af-111657b0ae4d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 11:41:17.120116 2500131 system_pods.go:89] "snapshot-controller-58dbcc7b99-hbhfd" [56955cb6-fc75-4d05-a712-24e43a0fe970] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 11:41:17.120122 2500131 system_pods.go:89] "storage-provisioner" [bf5bf2df-6736-4e3f-b735-67601e67a2cf] Running
	I1002 11:41:17.120134 2500131 system_pods.go:126] duration metric: took 20.837454ms to wait for k8s-apps to be running ...
	I1002 11:41:17.120149 2500131 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 11:41:17.120216 2500131 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 11:41:17.174735 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:41:17.180818 2500131 system_svc.go:56] duration metric: took 60.660136ms WaitForService to wait for kubelet.
	I1002 11:41:17.180908 2500131 kubeadm.go:581] duration metric: took 54.502612292s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1002 11:41:17.180991 2500131 node_conditions.go:102] verifying NodePressure condition ...
	I1002 11:41:17.185202 2500131 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1002 11:41:17.185240 2500131 node_conditions.go:123] node cpu capacity is 2
	I1002 11:41:17.185253 2500131 node_conditions.go:105] duration metric: took 4.225621ms to run NodePressure ...
	I1002 11:41:17.185266 2500131 start.go:228] waiting for startup goroutines ...
	I1002 11:41:17.238189 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:41:17.307397 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:41:17.317717 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 11:41:17.668497 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:41:17.737450 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:41:17.817161 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:41:17.821212 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 11:41:18.167724 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:41:18.237669 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:41:18.307013 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:41:18.313197 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 11:41:18.667056 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:41:18.738612 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:41:18.808698 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:41:18.811933 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 11:41:19.168439 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:41:19.238053 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:41:19.310229 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:41:19.311791 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 11:41:19.668293 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:41:19.737086 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:41:19.811143 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:41:19.815347 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 11:41:20.167882 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:41:20.237686 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:41:20.306100 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:41:20.311414 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 11:41:20.667992 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:41:20.737885 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:41:20.806057 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:41:20.811832 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 11:41:21.167967 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:41:21.237580 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:41:21.306906 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:41:21.311106 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 11:41:21.667394 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:41:21.751908 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:41:21.806281 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:41:21.811307 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 11:41:22.168135 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:41:22.237539 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:41:22.307614 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:41:22.317305 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 11:41:22.668851 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:41:22.738943 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:41:22.806811 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:41:22.812312 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 11:41:23.173481 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:41:23.238164 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:41:23.307787 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:41:23.312604 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 11:41:23.671025 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:41:23.737422 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:41:23.806479 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:41:23.811505 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 11:41:24.168423 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:41:24.237866 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:41:24.306446 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:41:24.310817 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 11:41:24.668133 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:41:24.737789 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:41:24.806485 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:41:24.810718 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 11:41:25.169265 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:41:25.237105 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:41:25.306274 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:41:25.310586 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 11:41:25.667760 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:41:25.742161 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:41:25.807157 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:41:25.811721 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 11:41:26.168488 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:41:26.237321 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:41:26.306654 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:41:26.311384 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 11:41:26.668392 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:41:26.739168 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:41:26.806669 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:41:26.811183 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 11:41:27.171041 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:41:27.245046 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:41:27.306217 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:41:27.312939 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 11:41:27.668110 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:41:27.737614 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:41:27.806752 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:41:27.811193 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 11:41:28.171884 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:41:28.240550 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:41:28.308225 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:41:28.312217 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 11:41:28.667492 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:41:28.736882 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:41:28.806742 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:41:28.812652 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 11:41:29.167307 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:41:29.236962 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:41:29.305946 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:41:29.311672 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 11:41:29.668678 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:41:29.739041 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:41:29.806500 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:41:29.810921 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 11:41:30.169672 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:41:30.237625 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:41:30.306920 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:41:30.311712 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 11:41:30.668308 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:41:30.739548 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:41:30.807131 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:41:30.814193 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 11:41:31.167554 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:41:31.237700 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:41:31.309009 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:41:31.313039 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 11:41:31.668784 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:41:31.738070 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:41:31.807250 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:41:31.814864 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 11:41:32.173721 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:41:32.238223 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:41:32.309441 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:41:32.312980 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 11:41:32.680247 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:41:32.736741 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:41:32.807353 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:41:32.816745 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 11:41:33.169174 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:41:33.239661 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:41:33.306126 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:41:33.313856 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 11:41:33.669080 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:41:33.737379 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:41:33.806951 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:41:33.811157 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 11:41:34.169031 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:41:34.237705 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:41:34.309304 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:41:34.319704 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 11:41:34.668176 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:41:34.737415 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:41:34.806972 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:41:34.817821 2500131 kapi.go:107] duration metric: took 1m6.036920182s to wait for kubernetes.io/minikube-addons=registry ...
	I1002 11:41:35.168160 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:41:35.237843 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:41:35.307150 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:41:35.670609 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:41:35.737963 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:41:35.806726 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:41:36.168272 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:41:36.238159 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:41:36.306919 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:41:36.668007 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:41:36.737715 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:41:36.806808 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:41:37.167455 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:41:37.240968 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:41:37.306812 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:41:37.671312 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:41:37.737282 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:41:37.807106 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:41:38.168209 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:41:38.241869 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:41:38.305944 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:41:38.669013 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:41:38.737851 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:41:38.807632 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:41:39.186571 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:41:39.237946 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:41:39.310696 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:41:39.668348 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:41:39.737225 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:41:39.807681 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:41:40.171663 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:41:40.237852 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:41:40.306989 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:41:40.672405 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:41:40.738126 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:41:40.807424 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:41:41.168811 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:41:41.238956 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:41:41.308081 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:41:41.669384 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:41:41.737757 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:41:41.806247 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:41:42.172682 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:41:42.249296 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:41:42.307981 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:41:42.676709 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:41:42.737252 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:41:42.807427 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:41:43.168139 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:41:43.265127 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:41:43.306739 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:41:43.692172 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:41:43.737552 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:41:43.806755 2500131 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 11:41:44.169189 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:41:44.237599 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:41:44.306440 2500131 kapi.go:107] duration metric: took 1m15.528878772s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1002 11:41:44.668304 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:41:44.737397 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:41:45.169338 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:41:45.238054 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:41:45.667794 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:41:45.739683 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:41:46.169112 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:41:46.238117 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:41:46.669518 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:41:46.740612 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 11:41:47.167000 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:41:47.237719 2500131 kapi.go:107] duration metric: took 1m15.080563935s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1002 11:41:47.239760 2500131 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-346248 cluster.
	I1002 11:41:47.241473 2500131 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1002 11:41:47.243663 2500131 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1002 11:41:47.667317 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:41:48.167696 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:41:48.672014 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:41:49.167988 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:41:49.666906 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:41:50.168023 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:41:50.667772 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:41:51.167901 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:41:51.667866 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:41:52.170663 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:41:52.668350 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:41:53.169641 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:41:53.667746 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:41:54.167554 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:41:54.668643 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:41:55.171205 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:41:55.667649 2500131 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 11:41:56.168085 2500131 kapi.go:107] duration metric: took 1m27.053991232s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1002 11:41:56.169996 2500131 out.go:177] * Enabled addons: cloud-spanner, storage-provisioner, ingress-dns, metrics-server, inspektor-gadget, default-storageclass, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1002 11:41:56.171922 2500131 addons.go:502] enable addons completed in 1m33.742696652s: enabled=[cloud-spanner storage-provisioner ingress-dns metrics-server inspektor-gadget default-storageclass volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1002 11:41:56.171975 2500131 start.go:233] waiting for cluster config update ...
	I1002 11:41:56.171994 2500131 start.go:242] writing updated cluster config ...
	I1002 11:41:56.172303 2500131 ssh_runner.go:195] Run: rm -f paused
	I1002 11:41:56.236851 2500131 start.go:600] kubectl: 1.28.2, cluster: 1.28.2 (minor skew: 0)
	I1002 11:41:56.239043 2500131 out.go:177] * Done! kubectl is now configured to use "addons-346248" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* Oct 02 11:44:55 addons-346248 crio[898]: time="2023-10-02 11:44:55.846871705Z" level=info msg="Checking image status: gcr.io/google-samples/hello-app:1.0" id=d19540a3-69e3-4cd6-81a6-d821fccd550e name=/runtime.v1.ImageService/ImageStatus
	Oct 02 11:44:55 addons-346248 crio[898]: time="2023-10-02 11:44:55.847077366Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:97e050c3e21e9472ce8eb8fcb7bb8f23063c0b473fe44bdc42246bb01c15cdd4,RepoTags:[gcr.io/google-samples/hello-app:1.0],RepoDigests:[gcr.io/google-samples/hello-app@sha256:9c2e643516106a9eab2bf382e5025f92ec1c15275a7f4315562271033b9356a6],Size_:28999827,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=d19540a3-69e3-4cd6-81a6-d821fccd550e name=/runtime.v1.ImageService/ImageStatus
	Oct 02 11:44:55 addons-346248 crio[898]: time="2023-10-02 11:44:55.848498964Z" level=info msg="Checking image status: gcr.io/google-samples/hello-app:1.0" id=6b81d8aa-a231-4183-bb72-ecb555fd8fdb name=/runtime.v1.ImageService/ImageStatus
	Oct 02 11:44:55 addons-346248 crio[898]: time="2023-10-02 11:44:55.848708022Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:97e050c3e21e9472ce8eb8fcb7bb8f23063c0b473fe44bdc42246bb01c15cdd4,RepoTags:[gcr.io/google-samples/hello-app:1.0],RepoDigests:[gcr.io/google-samples/hello-app@sha256:9c2e643516106a9eab2bf382e5025f92ec1c15275a7f4315562271033b9356a6],Size_:28999827,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=6b81d8aa-a231-4183-bb72-ecb555fd8fdb name=/runtime.v1.ImageService/ImageStatus
	Oct 02 11:44:55 addons-346248 crio[898]: time="2023-10-02 11:44:55.849486805Z" level=info msg="Creating container: default/hello-world-app-5d77478584-xbvmr/hello-world-app" id=d59ab50c-4d61-494e-a4c6-530cb3c91ee5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 11:44:55 addons-346248 crio[898]: time="2023-10-02 11:44:55.849588114Z" level=warning msg="Allowed annotations are specified for workload []"
	Oct 02 11:44:55 addons-346248 conmon[4788]: conmon 91efdd16f6486e9c3121 <ninfo>: container 4799 exited with status 137
	Oct 02 11:44:55 addons-346248 crio[898]: time="2023-10-02 11:44:55.945390254Z" level=info msg="Created container d2f8c28463566d1cd3de77e2a967b176329c89cd2683b90755273f74b256278a: default/hello-world-app-5d77478584-xbvmr/hello-world-app" id=d59ab50c-4d61-494e-a4c6-530cb3c91ee5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 11:44:55 addons-346248 crio[898]: time="2023-10-02 11:44:55.948714992Z" level=info msg="Starting container: d2f8c28463566d1cd3de77e2a967b176329c89cd2683b90755273f74b256278a" id=6f037de0-8ec2-41f1-a381-a8d40fee23e9 name=/runtime.v1.RuntimeService/StartContainer
	Oct 02 11:44:55 addons-346248 conmon[8106]: conmon d2f8c28463566d1cd3de <ninfo>: container 8117 exited with status 1
	Oct 02 11:44:55 addons-346248 crio[898]: time="2023-10-02 11:44:55.967717695Z" level=info msg="Started container" PID=8117 containerID=d2f8c28463566d1cd3de77e2a967b176329c89cd2683b90755273f74b256278a description=default/hello-world-app-5d77478584-xbvmr/hello-world-app id=6f037de0-8ec2-41f1-a381-a8d40fee23e9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=904e33b9c8a4624a03bbffc1a32ade384e663ad74f2c5aa7d29053e7a25975cb
	Oct 02 11:44:55 addons-346248 crio[898]: time="2023-10-02 11:44:55.968298406Z" level=info msg="Stopped container 91efdd16f6486e9c3121aed9fa5c35a63ac668aeea4d915522cd14237e6b5110: ingress-nginx/ingress-nginx-controller-f6b66b4b9-zw4n6/controller" id=d9c14519-e1a5-4a3f-889a-0454d200b38c name=/runtime.v1.RuntimeService/StopContainer
	Oct 02 11:44:55 addons-346248 crio[898]: time="2023-10-02 11:44:55.968867826Z" level=info msg="Stopping pod sandbox: 8128075a1267a6a5af3a86a9028ba5e78999a682713665531b6c91f00383438b" id=0ab8f073-7f58-47a2-a63b-136f2a507031 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 02 11:44:55 addons-346248 crio[898]: time="2023-10-02 11:44:55.979025794Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HP-BANWGEMK76QREZW4 - [0:0]\n:KUBE-HP-L22QOZFHZW3HUXX7 - [0:0]\n:KUBE-HOSTPORTS - [0:0]\n-X KUBE-HP-BANWGEMK76QREZW4\n-X KUBE-HP-L22QOZFHZW3HUXX7\nCOMMIT\n"
	Oct 02 11:44:55 addons-346248 crio[898]: time="2023-10-02 11:44:55.981818010Z" level=info msg="Closing host port tcp:80"
	Oct 02 11:44:55 addons-346248 crio[898]: time="2023-10-02 11:44:55.981865313Z" level=info msg="Closing host port tcp:443"
	Oct 02 11:44:55 addons-346248 crio[898]: time="2023-10-02 11:44:55.983577225Z" level=info msg="Host port tcp:80 does not have an open socket"
	Oct 02 11:44:55 addons-346248 crio[898]: time="2023-10-02 11:44:55.983616470Z" level=info msg="Host port tcp:443 does not have an open socket"
	Oct 02 11:44:55 addons-346248 crio[898]: time="2023-10-02 11:44:55.983786759Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-f6b66b4b9-zw4n6 Namespace:ingress-nginx ID:8128075a1267a6a5af3a86a9028ba5e78999a682713665531b6c91f00383438b UID:8c0d4f6b-06d8-4a01-ac73-303cceafad58 NetNS:/var/run/netns/52df6916-e34a-442c-b817-934cb62c98ba Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Oct 02 11:44:55 addons-346248 crio[898]: time="2023-10-02 11:44:55.983933746Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-f6b66b4b9-zw4n6 from CNI network \"kindnet\" (type=ptp)"
	Oct 02 11:44:56 addons-346248 crio[898]: time="2023-10-02 11:44:56.018458904Z" level=info msg="Stopped pod sandbox: 8128075a1267a6a5af3a86a9028ba5e78999a682713665531b6c91f00383438b" id=0ab8f073-7f58-47a2-a63b-136f2a507031 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 02 11:44:56 addons-346248 crio[898]: time="2023-10-02 11:44:56.047261233Z" level=info msg="Removing container: 3087bcb1df3ab98a244ae3d74a4e0ebc232d8850dd0426eebbd1a8f54e457d5f" id=99e17430-d6fb-46c9-8374-8129fc33383b name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 02 11:44:56 addons-346248 crio[898]: time="2023-10-02 11:44:56.074627425Z" level=info msg="Removed container 3087bcb1df3ab98a244ae3d74a4e0ebc232d8850dd0426eebbd1a8f54e457d5f: default/hello-world-app-5d77478584-xbvmr/hello-world-app" id=99e17430-d6fb-46c9-8374-8129fc33383b name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 02 11:44:56 addons-346248 crio[898]: time="2023-10-02 11:44:56.076106491Z" level=info msg="Removing container: 91efdd16f6486e9c3121aed9fa5c35a63ac668aeea4d915522cd14237e6b5110" id=8778282f-d629-4934-bd90-04ce4d177df9 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 02 11:44:56 addons-346248 crio[898]: time="2023-10-02 11:44:56.116914963Z" level=info msg="Removed container 91efdd16f6486e9c3121aed9fa5c35a63ac668aeea4d915522cd14237e6b5110: ingress-nginx/ingress-nginx-controller-f6b66b4b9-zw4n6/controller" id=8778282f-d629-4934-bd90-04ce4d177df9 name=/runtime.v1.RuntimeService/RemoveContainer
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	d2f8c28463566       97e050c3e21e9472ce8eb8fcb7bb8f23063c0b473fe44bdc42246bb01c15cdd4                                                             5 seconds ago       Exited              hello-world-app           2                   904e33b9c8a46       hello-world-app-5d77478584-xbvmr
	682d37340600b       ghcr.io/headlamp-k8s/headlamp@sha256:44b17c125fc5da7899f2583ca3468a31cc80ea52c9ef2aad503f58d91908e4c1                        2 minutes ago       Running             headlamp                  0                   b776bda90e9ec       headlamp-58b88cff49-m56g6
	62d2a1d632673       docker.io/library/nginx@sha256:4c93a3bd8bf95412889dd84213570102176b6052d88bb828eaf449c56aca55ef                              2 minutes ago       Running             nginx                     0                   5b30b9276d08a       nginx
	a06bbbfd0155d       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:63b520448091bc94aa4dba00d6b3b3c25e410c4fb73aa46feae5b25f9895abaa                 3 minutes ago       Running             gcp-auth                  0                   dcdfed2978f61       gcp-auth-d4c87556c-nxbvp
	9de30c4f5a209       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:543c40fd093964bc9ab509d3e791f9989963021f1e9e4c9c7b6700b02bfb227b   3 minutes ago       Exited              patch                     0                   05df09baf99ef       ingress-nginx-admission-patch-2rbjw
	dd1747da4b8bb       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:543c40fd093964bc9ab509d3e791f9989963021f1e9e4c9c7b6700b02bfb227b   3 minutes ago       Exited              create                    0                   c2c29d17ca48d       ingress-nginx-admission-create-bdjnl
	d45f64530c779       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98             4 minutes ago       Running             local-path-provisioner    0                   ef787bd89a63b       local-path-provisioner-78b46b4d5c-qzz9q
	56eb09d48eafb       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                                             4 minutes ago       Running             coredns                   0                   6b87d0768df10       coredns-5dd5756b68-ts974
	fd2ba1092921a       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                             4 minutes ago       Running             storage-provisioner       0                   0b8197865ca1a       storage-provisioner
	a327c277e193c       7da62c127fc0f2c3473babe4dd0fe1da874278c4e524a490b1781e3e0e6dddfa                                                             4 minutes ago       Running             kube-proxy                0                   98beb942966b6       kube-proxy-tgtnc
	915b3a613a536       04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26                                                             4 minutes ago       Running             kindnet-cni               0                   0741950dcc0dc       kindnet-pwj89
	675297023e01b       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                                             4 minutes ago       Running             etcd                      0                   86b3bc3811884       etcd-addons-346248
	15ce2eaf7045d       30bb499447fe1bc655853e2e8ac386cdeb28c80263536259cb54f290f9a58d6c                                                             4 minutes ago       Running             kube-apiserver            0                   9ebb72365e311       kube-apiserver-addons-346248
	ea1acecd5d212       64fc40cee3716a4596d219b360ce536adb7998eaeae3f5dbb774d6503f5039d7                                                             4 minutes ago       Running             kube-scheduler            0                   5da02e160a435       kube-scheduler-addons-346248
	ddcdd0a6a40fe       89d57b83c17862d0ca2dd214e9e5ad425f8d67ecba32d10b846f8d22d3b5597c                                                             4 minutes ago       Running             kube-controller-manager   0                   0b82d71502736       kube-controller-manager-addons-346248
	
	* 
	* ==> coredns [56eb09d48eafbe96aa246164c0d13cc9f8e77104883b484906c555051721a679] <==
	* [INFO] 10.244.0.17:49060 - 4904 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000165375s
	[INFO] 10.244.0.17:49060 - 4959 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.00207238s
	[INFO] 10.244.0.17:36198 - 15362 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002159641s
	[INFO] 10.244.0.17:49060 - 41773 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001580679s
	[INFO] 10.244.0.17:36198 - 63032 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001246286s
	[INFO] 10.244.0.17:49060 - 31997 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000190892s
	[INFO] 10.244.0.17:36198 - 14184 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000306544s
	[INFO] 10.244.0.17:38673 - 5968 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000042888s
	[INFO] 10.244.0.17:59407 - 16203 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000283249s
	[INFO] 10.244.0.17:59407 - 2648 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000183352s
	[INFO] 10.244.0.17:38673 - 50585 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000236685s
	[INFO] 10.244.0.17:59407 - 53130 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000126728s
	[INFO] 10.244.0.17:38673 - 28871 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000155856s
	[INFO] 10.244.0.17:59407 - 31131 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000076324s
	[INFO] 10.244.0.17:38673 - 62524 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000039811s
	[INFO] 10.244.0.17:59407 - 51778 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000078023s
	[INFO] 10.244.0.17:38673 - 60647 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000062745s
	[INFO] 10.244.0.17:38673 - 39499 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000062499s
	[INFO] 10.244.0.17:59407 - 61616 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000073862s
	[INFO] 10.244.0.17:59407 - 33543 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.00176472s
	[INFO] 10.244.0.17:38673 - 2829 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002145758s
	[INFO] 10.244.0.17:59407 - 20974 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001049396s
	[INFO] 10.244.0.17:59407 - 61089 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000055524s
	[INFO] 10.244.0.17:38673 - 44250 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.000869367s
	[INFO] 10.244.0.17:38673 - 57374 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000075578s
	
	* 
	* ==> describe nodes <==
	* Name:               addons-346248
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-346248
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=45957ed538272972541ab48cdf2c4b323d7f5c18
	                    minikube.k8s.io/name=addons-346248
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_02T11_40_10_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-346248
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 02 Oct 2023 11:40:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-346248
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 02 Oct 2023 11:44:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 02 Oct 2023 11:44:45 +0000   Mon, 02 Oct 2023 11:40:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 02 Oct 2023 11:44:45 +0000   Mon, 02 Oct 2023 11:40:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 02 Oct 2023 11:44:45 +0000   Mon, 02 Oct 2023 11:40:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 02 Oct 2023 11:44:45 +0000   Mon, 02 Oct 2023 11:40:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-346248
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	System Info:
	  Machine ID:                 3e3a2ad06c0c480ea0722566a0b49af5
	  System UUID:                83cb4657-bcbc-4202-9a55-dd921f81599d
	  Boot ID:                    67922263-14c1-496d-a009-5b9469adca8d
	  Kernel Version:             5.15.0-1045-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.2
	  Kube-Proxy Version:         v1.28.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5d77478584-xbvmr           0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m48s
	  gcp-auth                    gcp-auth-d4c87556c-nxbvp                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m29s
	  headlamp                    headlamp-58b88cff49-m56g6                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m37s
	  kube-system                 coredns-5dd5756b68-ts974                   100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     4m39s
	  kube-system                 etcd-addons-346248                         100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         4m53s
	  kube-system                 kindnet-pwj89                              100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      4m39s
	  kube-system                 kube-apiserver-addons-346248               250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m51s
	  kube-system                 kube-controller-manager-addons-346248      200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m51s
	  kube-system                 kube-proxy-tgtnc                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m39s
	  kube-system                 kube-scheduler-addons-346248               100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m51s
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m34s
	  local-path-storage          local-path-provisioner-78b46b4d5c-qzz9q    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m34s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age              From             Message
	  ----    ------                   ----             ----             -------
	  Normal  Starting                 4m33s            kube-proxy       
	  Normal  Starting                 5m               kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m (x8 over 5m)  kubelet          Node addons-346248 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m (x8 over 5m)  kubelet          Node addons-346248 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m (x8 over 5m)  kubelet          Node addons-346248 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m52s            kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m52s            kubelet          Node addons-346248 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m52s            kubelet          Node addons-346248 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m52s            kubelet          Node addons-346248 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m40s            node-controller  Node addons-346248 event: Registered Node addons-346248 in Controller
	  Normal  NodeReady                4m6s             kubelet          Node addons-346248 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.000706] FS-Cache: N-cookie c=000000ae [p=000000a5 fl=2 nc=0 na=1]
	[  +0.000968] FS-Cache: N-cookie d=00000000b3d4d329{9p.inode} n=00000000bb1048f9
	[  +0.001093] FS-Cache: N-key=[8] '0c485c0100000000'
	[  +0.003006] FS-Cache: Duplicate cookie detected
	[  +0.001024] FS-Cache: O-cookie c=000000a8 [p=000000a5 fl=226 nc=0 na=1]
	[  +0.001010] FS-Cache: O-cookie d=00000000b3d4d329{9p.inode} n=0000000013ece049
	[  +0.001055] FS-Cache: O-key=[8] '0c485c0100000000'
	[  +0.000763] FS-Cache: N-cookie c=000000af [p=000000a5 fl=2 nc=0 na=1]
	[  +0.000938] FS-Cache: N-cookie d=00000000b3d4d329{9p.inode} n=000000000501d355
	[  +0.001059] FS-Cache: N-key=[8] '0c485c0100000000'
	[  +2.569349] FS-Cache: Duplicate cookie detected
	[  +0.000749] FS-Cache: O-cookie c=000000a6 [p=000000a5 fl=226 nc=0 na=1]
	[  +0.000978] FS-Cache: O-cookie d=00000000b3d4d329{9p.inode} n=00000000f66d06b9
	[  +0.001050] FS-Cache: O-key=[8] '0b485c0100000000'
	[  +0.000707] FS-Cache: N-cookie c=000000b1 [p=000000a5 fl=2 nc=0 na=1]
	[  +0.000961] FS-Cache: N-cookie d=00000000b3d4d329{9p.inode} n=00000000bb1048f9
	[  +0.001056] FS-Cache: N-key=[8] '0b485c0100000000'
	[  +0.380763] FS-Cache: Duplicate cookie detected
	[  +0.000715] FS-Cache: O-cookie c=000000ab [p=000000a5 fl=226 nc=0 na=1]
	[  +0.000959] FS-Cache: O-cookie d=00000000b3d4d329{9p.inode} n=00000000e33aca47
	[  +0.001103] FS-Cache: O-key=[8] '11485c0100000000'
	[  +0.000712] FS-Cache: N-cookie c=000000b2 [p=000000a5 fl=2 nc=0 na=1]
	[  +0.000952] FS-Cache: N-cookie d=00000000b3d4d329{9p.inode} n=00000000389fe983
	[  +0.001060] FS-Cache: N-key=[8] '11485c0100000000'
	[ +28.749314] new mount options do not match the existing superblock, will be ignored
	
	* 
	* ==> etcd [675297023e01b054a7e0febd38f137b69e86303d55d8b6bfec70464f3fc78f18] <==
	* {"level":"info","ts":"2023-10-02T11:40:02.640897Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2023-10-02T11:40:02.64093Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2023-10-02T11:40:02.644229Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-346248 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2023-10-02T11:40:02.644318Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-02T11:40:02.645328Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-10-02T11:40:02.645479Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-02T11:40:02.645723Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-02T11:40:02.646803Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-02T11:40:02.646937Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-02T11:40:02.646998Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-02T11:40:02.647151Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-10-02T11:40:02.647189Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-10-02T11:40:02.663666Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"warn","ts":"2023-10-02T11:40:23.550155Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"111.0585ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128024191701591953 > lease_revoke:<id:70cc8af03027a615>","response":"size:29"}
	{"level":"info","ts":"2023-10-02T11:40:25.010584Z","caller":"traceutil/trace.go:171","msg":"trace[1713014421] transaction","detail":"{read_only:false; response_revision:404; number_of_response:1; }","duration":"113.692003ms","start":"2023-10-02T11:40:24.896869Z","end":"2023-10-02T11:40:25.010561Z","steps":["trace[1713014421] 'process raft request'  (duration: 106.818601ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-02T11:40:26.453339Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"110.711085ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128024191701591994 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/serviceaccounts/kube-system/endpointslice-controller\" mod_revision:320 > success:<request_put:<key:\"/registry/serviceaccounts/kube-system/endpointslice-controller\" value_size:141 >> failure:<request_range:<key:\"/registry/serviceaccounts/kube-system/endpointslice-controller\" > >>","response":"size:16"}
	{"level":"info","ts":"2023-10-02T11:40:26.455384Z","caller":"traceutil/trace.go:171","msg":"trace[1714942200] transaction","detail":"{read_only:false; response_revision:419; number_of_response:1; }","duration":"145.344657ms","start":"2023-10-02T11:40:26.310018Z","end":"2023-10-02T11:40:26.455363Z","steps":["trace[1714942200] 'process raft request'  (duration: 32.287764ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-02T11:40:26.455889Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"146.966492ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2023-10-02T11:40:26.455934Z","caller":"traceutil/trace.go:171","msg":"trace[1246896456] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:421; }","duration":"147.024388ms","start":"2023-10-02T11:40:26.308901Z","end":"2023-10-02T11:40:26.455926Z","steps":["trace[1246896456] 'agreement among raft nodes before linearized reading'  (duration: 146.919691ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-02T11:40:26.456141Z","caller":"traceutil/trace.go:171","msg":"trace[681296249] transaction","detail":"{read_only:false; response_revision:420; number_of_response:1; }","duration":"145.881913ms","start":"2023-10-02T11:40:26.310251Z","end":"2023-10-02T11:40:26.456133Z","steps":["trace[681296249] 'process raft request'  (duration: 143.169919ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-02T11:40:26.456265Z","caller":"traceutil/trace.go:171","msg":"trace[530510778] transaction","detail":"{read_only:false; response_revision:421; number_of_response:1; }","duration":"145.076325ms","start":"2023-10-02T11:40:26.311182Z","end":"2023-10-02T11:40:26.456259Z","steps":["trace[530510778] 'process raft request'  (duration: 144.594995ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-02T11:40:26.456385Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"146.470763ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2023-10-02T11:40:26.456412Z","caller":"traceutil/trace.go:171","msg":"trace[1534148602] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:421; }","duration":"146.503928ms","start":"2023-10-02T11:40:26.309902Z","end":"2023-10-02T11:40:26.456406Z","steps":["trace[1534148602] 'agreement among raft nodes before linearized reading'  (duration: 146.44591ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-02T11:40:26.456505Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"147.121167ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/ranges/serviceips\" ","response":"range_response_count:1 size:116"}
	{"level":"info","ts":"2023-10-02T11:40:26.456569Z","caller":"traceutil/trace.go:171","msg":"trace[2109527933] range","detail":"{range_begin:/registry/ranges/serviceips; range_end:; response_count:1; response_revision:421; }","duration":"147.20134ms","start":"2023-10-02T11:40:26.309358Z","end":"2023-10-02T11:40:26.456559Z","steps":["trace[2109527933] 'agreement among raft nodes before linearized reading'  (duration: 147.125246ms)"],"step_count":1}
	
	* 
	* ==> gcp-auth [a06bbbfd0155d10b4a484c48b10512e04e8107755eba1c0ed574c6e72af761f8] <==
	* 2023/10/02 11:41:46 GCP Auth Webhook started!
	2023/10/02 11:41:57 Ready to marshal response ...
	2023/10/02 11:41:57 Ready to write response ...
	2023/10/02 11:41:57 Ready to marshal response ...
	2023/10/02 11:41:57 Ready to write response ...
	2023/10/02 11:42:05 Ready to marshal response ...
	2023/10/02 11:42:05 Ready to write response ...
	2023/10/02 11:42:06 Ready to marshal response ...
	2023/10/02 11:42:06 Ready to write response ...
	2023/10/02 11:42:13 Ready to marshal response ...
	2023/10/02 11:42:13 Ready to write response ...
	2023/10/02 11:42:24 Ready to marshal response ...
	2023/10/02 11:42:24 Ready to write response ...
	2023/10/02 11:42:24 Ready to marshal response ...
	2023/10/02 11:42:24 Ready to write response ...
	2023/10/02 11:42:24 Ready to marshal response ...
	2023/10/02 11:42:24 Ready to write response ...
	2023/10/02 11:42:49 Ready to marshal response ...
	2023/10/02 11:42:49 Ready to write response ...
	2023/10/02 11:43:19 Ready to marshal response ...
	2023/10/02 11:43:19 Ready to write response ...
	2023/10/02 11:44:35 Ready to marshal response ...
	2023/10/02 11:44:35 Ready to write response ...
	
	* 
	* ==> kernel <==
	*  11:45:01 up 19:27,  0 users,  load average: 0.82, 1.73, 2.18
	Linux addons-346248 5.15.0-1045-aws #50~20.04.1-Ubuntu SMP Wed Sep 6 17:32:55 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [915b3a613a5367d8d939660fca4eae4632b799c38ccf631a25ea657977f98e51] <==
	* I1002 11:42:55.552376       1 main.go:227] handling current node
	I1002 11:43:05.565897       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1002 11:43:05.565927       1 main.go:227] handling current node
	I1002 11:43:15.577102       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1002 11:43:15.577137       1 main.go:227] handling current node
	I1002 11:43:25.582025       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1002 11:43:25.582063       1 main.go:227] handling current node
	I1002 11:43:35.594195       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1002 11:43:35.594225       1 main.go:227] handling current node
	I1002 11:43:45.606219       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1002 11:43:45.606248       1 main.go:227] handling current node
	I1002 11:43:55.610540       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1002 11:43:55.610570       1 main.go:227] handling current node
	I1002 11:44:05.622124       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1002 11:44:05.622153       1 main.go:227] handling current node
	I1002 11:44:15.627406       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1002 11:44:15.627453       1 main.go:227] handling current node
	I1002 11:44:25.641073       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1002 11:44:25.641278       1 main.go:227] handling current node
	I1002 11:44:35.657485       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1002 11:44:35.657593       1 main.go:227] handling current node
	I1002 11:44:45.669826       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1002 11:44:45.669856       1 main.go:227] handling current node
	I1002 11:44:55.680706       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1002 11:44:55.680733       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [15ce2eaf7045df542897158ee550bf6c264e0052a7d63b00225315244e772641] <==
	* I1002 11:42:17.338288       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W1002 11:42:18.376761       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I1002 11:42:22.723955       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I1002 11:42:24.162137       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.111.170.179"}
	I1002 11:43:01.018092       1 controller.go:624] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1002 11:43:34.631077       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1002 11:43:34.631137       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1002 11:43:34.653130       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1002 11:43:34.653195       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1002 11:43:34.676644       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1002 11:43:34.677343       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1002 11:43:34.680256       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1002 11:43:34.681164       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1002 11:43:34.694391       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1002 11:43:34.694454       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1002 11:43:34.701695       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1002 11:43:34.701846       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1002 11:43:34.715573       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1002 11:43:34.715626       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1002 11:43:34.738700       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1002 11:43:34.739138       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1002 11:43:35.682184       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1002 11:43:35.738860       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W1002 11:43:35.742010       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	I1002 11:44:35.602590       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.110.52.136"}
	
	* 
	* ==> kube-controller-manager [ddcdd0a6a40fe62e96fb42d6f9c2de44c1a900382b9a16313dbf1fc5327af2ae] <==
	* E1002 11:44:07.426251       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1002 11:44:15.424569       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1002 11:44:15.424628       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1002 11:44:20.662729       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1002 11:44:20.662761       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1002 11:44:25.011961       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1002 11:44:25.011998       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I1002 11:44:35.302405       1 event.go:307] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-5d77478584 to 1"
	I1002 11:44:35.341630       1 event.go:307] "Event occurred" object="default/hello-world-app-5d77478584" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-5d77478584-xbvmr"
	I1002 11:44:35.348407       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="46.285113ms"
	I1002 11:44:35.374907       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="26.096291ms"
	I1002 11:44:35.375617       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="59.356µs"
	I1002 11:44:38.016764       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="66.929µs"
	I1002 11:44:39.011627       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="50.223µs"
	I1002 11:44:40.016140       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="81.781µs"
	W1002 11:44:44.900607       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1002 11:44:44.900641       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1002 11:44:51.962929       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1002 11:44:51.963023       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I1002 11:44:52.750753       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I1002 11:44:52.756092       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-f6b66b4b9" duration="7.385µs"
	I1002 11:44:52.758433       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	W1002 11:44:54.370586       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1002 11:44:54.370619       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I1002 11:44:56.064143       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="90.363µs"
	
	* 
	* ==> kube-proxy [a327c277e193cdb059a583da9aea5b97de9ee8432fcf7a6e5a424364c2c232f1] <==
	* I1002 11:40:28.330034       1 server_others.go:69] "Using iptables proxy"
	I1002 11:40:28.361506       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I1002 11:40:28.414864       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1002 11:40:28.417407       1 server_others.go:152] "Using iptables Proxier"
	I1002 11:40:28.417449       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1002 11:40:28.417457       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1002 11:40:28.417533       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1002 11:40:28.417772       1 server.go:846] "Version info" version="v1.28.2"
	I1002 11:40:28.417789       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 11:40:28.419123       1 config.go:188] "Starting service config controller"
	I1002 11:40:28.419244       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1002 11:40:28.419279       1 config.go:97] "Starting endpoint slice config controller"
	I1002 11:40:28.419284       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1002 11:40:28.424749       1 config.go:315] "Starting node config controller"
	I1002 11:40:28.424844       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1002 11:40:28.521980       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1002 11:40:28.531285       1 shared_informer.go:318] Caches are synced for service config
	I1002 11:40:28.530195       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [ea1acecd5d2128865696387f7dea3362cefe7b18a0cb9e7e86e3b701ead82af8] <==
	* W1002 11:40:07.241903       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1002 11:40:07.241992       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1002 11:40:07.242031       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1002 11:40:07.242060       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1002 11:40:07.241999       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1002 11:40:07.242078       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1002 11:40:07.241970       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1002 11:40:07.242103       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1002 11:40:07.241600       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1002 11:40:07.242116       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1002 11:40:07.241755       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1002 11:40:07.242129       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1002 11:40:07.241937       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1002 11:40:07.242140       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1002 11:40:07.242228       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1002 11:40:07.242243       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1002 11:40:07.242283       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1002 11:40:07.242300       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1002 11:40:07.242358       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1002 11:40:07.242372       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1002 11:40:07.242402       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1002 11:40:07.242462       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1002 11:40:07.242411       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1002 11:40:07.242549       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I1002 11:40:08.737824       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* Oct 02 11:44:51 addons-346248 kubelet[1363]: I1002 11:44:51.605690    1363 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-cgvbn\" (UniqueName: \"kubernetes.io/projected/61ac6b36-7371-4a90-a337-92f82d4ed43b-kube-api-access-cgvbn\") on node \"addons-346248\" DevicePath \"\""
	Oct 02 11:44:52 addons-346248 kubelet[1363]: I1002 11:44:52.033962    1363 scope.go:117] "RemoveContainer" containerID="4ac2952d32861b393f07a8900ae9c81871dd16189ea99e9fa4a9cd1d1e972e34"
	Oct 02 11:44:53 addons-346248 kubelet[1363]: I1002 11:44:53.847618    1363 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="61ac6b36-7371-4a90-a337-92f82d4ed43b" path="/var/lib/kubelet/pods/61ac6b36-7371-4a90-a337-92f82d4ed43b/volumes"
	Oct 02 11:44:53 addons-346248 kubelet[1363]: I1002 11:44:53.848461    1363 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="8ca7e270-87be-4dff-9146-f1d1d6405fd2" path="/var/lib/kubelet/pods/8ca7e270-87be-4dff-9146-f1d1d6405fd2/volumes"
	Oct 02 11:44:53 addons-346248 kubelet[1363]: I1002 11:44:53.850390    1363 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="a49d568c-6cb0-4343-8b63-211748f6b922" path="/var/lib/kubelet/pods/a49d568c-6cb0-4343-8b63-211748f6b922/volumes"
	Oct 02 11:44:55 addons-346248 kubelet[1363]: I1002 11:44:55.846321    1363 scope.go:117] "RemoveContainer" containerID="3087bcb1df3ab98a244ae3d74a4e0ebc232d8850dd0426eebbd1a8f54e457d5f"
	Oct 02 11:44:56 addons-346248 kubelet[1363]: I1002 11:44:56.045307    1363 scope.go:117] "RemoveContainer" containerID="3087bcb1df3ab98a244ae3d74a4e0ebc232d8850dd0426eebbd1a8f54e457d5f"
	Oct 02 11:44:56 addons-346248 kubelet[1363]: I1002 11:44:56.045529    1363 scope.go:117] "RemoveContainer" containerID="d2f8c28463566d1cd3de77e2a967b176329c89cd2683b90755273f74b256278a"
	Oct 02 11:44:56 addons-346248 kubelet[1363]: E1002 11:44:56.045796    1363 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5d77478584-xbvmr_default(a8a822f1-adcf-4cc1-9511-e8d819f3e2b6)\"" pod="default/hello-world-app-5d77478584-xbvmr" podUID="a8a822f1-adcf-4cc1-9511-e8d819f3e2b6"
	Oct 02 11:44:56 addons-346248 kubelet[1363]: I1002 11:44:56.074890    1363 scope.go:117] "RemoveContainer" containerID="91efdd16f6486e9c3121aed9fa5c35a63ac668aeea4d915522cd14237e6b5110"
	Oct 02 11:44:56 addons-346248 kubelet[1363]: I1002 11:44:56.117170    1363 scope.go:117] "RemoveContainer" containerID="91efdd16f6486e9c3121aed9fa5c35a63ac668aeea4d915522cd14237e6b5110"
	Oct 02 11:44:56 addons-346248 kubelet[1363]: E1002 11:44:56.117566    1363 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"91efdd16f6486e9c3121aed9fa5c35a63ac668aeea4d915522cd14237e6b5110\": container with ID starting with 91efdd16f6486e9c3121aed9fa5c35a63ac668aeea4d915522cd14237e6b5110 not found: ID does not exist" containerID="91efdd16f6486e9c3121aed9fa5c35a63ac668aeea4d915522cd14237e6b5110"
	Oct 02 11:44:56 addons-346248 kubelet[1363]: I1002 11:44:56.117613    1363 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"91efdd16f6486e9c3121aed9fa5c35a63ac668aeea4d915522cd14237e6b5110"} err="failed to get container status \"91efdd16f6486e9c3121aed9fa5c35a63ac668aeea4d915522cd14237e6b5110\": rpc error: code = NotFound desc = could not find container \"91efdd16f6486e9c3121aed9fa5c35a63ac668aeea4d915522cd14237e6b5110\": container with ID starting with 91efdd16f6486e9c3121aed9fa5c35a63ac668aeea4d915522cd14237e6b5110 not found: ID does not exist"
	Oct 02 11:44:56 addons-346248 kubelet[1363]: I1002 11:44:56.138003    1363 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-th72k\" (UniqueName: \"kubernetes.io/projected/8c0d4f6b-06d8-4a01-ac73-303cceafad58-kube-api-access-th72k\") pod \"8c0d4f6b-06d8-4a01-ac73-303cceafad58\" (UID: \"8c0d4f6b-06d8-4a01-ac73-303cceafad58\") "
	Oct 02 11:44:56 addons-346248 kubelet[1363]: I1002 11:44:56.138068    1363 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8c0d4f6b-06d8-4a01-ac73-303cceafad58-webhook-cert\") pod \"8c0d4f6b-06d8-4a01-ac73-303cceafad58\" (UID: \"8c0d4f6b-06d8-4a01-ac73-303cceafad58\") "
	Oct 02 11:44:56 addons-346248 kubelet[1363]: I1002 11:44:56.140685    1363 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c0d4f6b-06d8-4a01-ac73-303cceafad58-kube-api-access-th72k" (OuterVolumeSpecName: "kube-api-access-th72k") pod "8c0d4f6b-06d8-4a01-ac73-303cceafad58" (UID: "8c0d4f6b-06d8-4a01-ac73-303cceafad58"). InnerVolumeSpecName "kube-api-access-th72k". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Oct 02 11:44:56 addons-346248 kubelet[1363]: I1002 11:44:56.141234    1363 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c0d4f6b-06d8-4a01-ac73-303cceafad58-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "8c0d4f6b-06d8-4a01-ac73-303cceafad58" (UID: "8c0d4f6b-06d8-4a01-ac73-303cceafad58"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Oct 02 11:44:56 addons-346248 kubelet[1363]: I1002 11:44:56.239342    1363 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8c0d4f6b-06d8-4a01-ac73-303cceafad58-webhook-cert\") on node \"addons-346248\" DevicePath \"\""
	Oct 02 11:44:56 addons-346248 kubelet[1363]: I1002 11:44:56.239385    1363 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-th72k\" (UniqueName: \"kubernetes.io/projected/8c0d4f6b-06d8-4a01-ac73-303cceafad58-kube-api-access-th72k\") on node \"addons-346248\" DevicePath \"\""
	Oct 02 11:44:56 addons-346248 kubelet[1363]: E1002 11:44:56.501180    1363 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/042b1e78ba63cf54b740a1b6f8ff9a172474058a4c9e4bf396319fa27509579b/diff" to get inode usage: stat /var/lib/containers/storage/overlay/042b1e78ba63cf54b740a1b6f8ff9a172474058a4c9e4bf396319fa27509579b/diff: no such file or directory, extraDiskErr: <nil>
	Oct 02 11:44:56 addons-346248 kubelet[1363]: E1002 11:44:56.776699    1363 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/16a7f2b39d930bec58bfb3942407fc8275312491014926d207c601dd334bab00/diff" to get inode usage: stat /var/lib/containers/storage/overlay/16a7f2b39d930bec58bfb3942407fc8275312491014926d207c601dd334bab00/diff: no such file or directory, extraDiskErr: <nil>
	Oct 02 11:44:56 addons-346248 kubelet[1363]: E1002 11:44:56.877413    1363 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/fb8adb06acd27afcb5df396c6386f2e86a2ce05b12581ed670ee74c390b6b3dc/diff" to get inode usage: stat /var/lib/containers/storage/overlay/fb8adb06acd27afcb5df396c6386f2e86a2ce05b12581ed670ee74c390b6b3dc/diff: no such file or directory, extraDiskErr: <nil>
	Oct 02 11:44:56 addons-346248 kubelet[1363]: E1002 11:44:56.885593    1363 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/5f47e55af47fc5364cf234894be8875a176e7aadeff780b016dcbcdef9e66c8b/diff" to get inode usage: stat /var/lib/containers/storage/overlay/5f47e55af47fc5364cf234894be8875a176e7aadeff780b016dcbcdef9e66c8b/diff: no such file or directory, extraDiskErr: <nil>
	Oct 02 11:44:57 addons-346248 kubelet[1363]: I1002 11:44:57.847595    1363 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="8c0d4f6b-06d8-4a01-ac73-303cceafad58" path="/var/lib/kubelet/pods/8c0d4f6b-06d8-4a01-ac73-303cceafad58/volumes"
	Oct 02 11:45:01 addons-346248 kubelet[1363]: E1002 11:45:01.629611    1363 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/1956681cb67149941d87ef64a7b3907379e3a9f66e8e42f30425c6b378d27b00/diff" to get inode usage: stat /var/lib/containers/storage/overlay/1956681cb67149941d87ef64a7b3907379e3a9f66e8e42f30425c6b378d27b00/diff: no such file or directory, extraDiskErr: <nil>
	
	* 
	* ==> storage-provisioner [fd2ba1092921a126a531f77035e18c56a8a4a6ea5741e99cf1c8122a1564fe9a] <==
	* I1002 11:40:57.120012       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1002 11:40:57.134854       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1002 11:40:57.134942       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1002 11:40:57.146288       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1002 11:40:57.146489       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-346248_d764f598-e2be-46a3-a2ca-fdde86459f1c!
	I1002 11:40:57.147487       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"cf02dced-99bf-451c-b2c0-caa94ed4855a", APIVersion:"v1", ResourceVersion:"857", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-346248_d764f598-e2be-46a3-a2ca-fdde86459f1c became leader
	I1002 11:40:57.247537       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-346248_d764f598-e2be-46a3-a2ca-fdde86459f1c!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-346248 -n addons-346248
helpers_test.go:261: (dbg) Run:  kubectl --context addons-346248 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (170.56s)

TestIngressAddonLegacy/serial/ValidateIngressAddons (180.35s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:185: (dbg) Run:  kubectl --context ingress-addon-legacy-999051 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
E1002 11:52:23.941651 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/addons-346248/client.crt: no such file or directory
addons_test.go:185: (dbg) Done: kubectl --context ingress-addon-legacy-999051 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (11.681516172s)
addons_test.go:210: (dbg) Run:  kubectl --context ingress-addon-legacy-999051 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:223: (dbg) Run:  kubectl --context ingress-addon-legacy-999051 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:228: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [f6aab662-58ff-45c3-8f6e-0c760b311604] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [f6aab662-58ff-45c3-8f6e-0c760b311604] Running
addons_test.go:228: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 9.024012027s
addons_test.go:240: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-999051 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
E1002 11:54:24.742713 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/functional-262988/client.crt: no such file or directory
E1002 11:54:24.748260 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/functional-262988/client.crt: no such file or directory
E1002 11:54:24.758575 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/functional-262988/client.crt: no such file or directory
E1002 11:54:24.778821 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/functional-262988/client.crt: no such file or directory
E1002 11:54:24.819074 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/functional-262988/client.crt: no such file or directory
E1002 11:54:24.899466 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/functional-262988/client.crt: no such file or directory
E1002 11:54:25.059855 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/functional-262988/client.crt: no such file or directory
E1002 11:54:25.380422 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/functional-262988/client.crt: no such file or directory
E1002 11:54:26.021230 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/functional-262988/client.crt: no such file or directory
E1002 11:54:27.301423 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/functional-262988/client.crt: no such file or directory
E1002 11:54:29.861657 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/functional-262988/client.crt: no such file or directory
E1002 11:54:34.982771 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/functional-262988/client.crt: no such file or directory
E1002 11:54:45.223066 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/functional-262988/client.crt: no such file or directory
addons_test.go:240: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ingress-addon-legacy-999051 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.937199251s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:256: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:264: (dbg) Run:  kubectl --context ingress-addon-legacy-999051 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:269: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-999051 ip
addons_test.go:275: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:275: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.009972265s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
addons_test.go:277: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:281: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached



stderr: 
addons_test.go:284: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-999051 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:284: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-999051 addons disable ingress-dns --alsologtostderr -v=1: (2.392535607s)
addons_test.go:289: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-999051 addons disable ingress --alsologtostderr -v=1
E1002 11:55:05.703854 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/functional-262988/client.crt: no such file or directory
addons_test.go:289: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-999051 addons disable ingress --alsologtostderr -v=1: (7.595881871s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-999051
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-999051:

-- stdout --
	[
	    {
	        "Id": "7b98f472af9d491f81c5b8eb824d0a8166fe8fb20a334f890f3e0bb83993480c",
	        "Created": "2023-10-02T11:50:50.927474854Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2527713,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-10-02T11:50:51.267204068Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:560a33002deec07a703a16e2b1dbf6aecde4c0d46aaefa1cb6df4c8c8a7774a7",
	        "ResolvConfPath": "/var/lib/docker/containers/7b98f472af9d491f81c5b8eb824d0a8166fe8fb20a334f890f3e0bb83993480c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7b98f472af9d491f81c5b8eb824d0a8166fe8fb20a334f890f3e0bb83993480c/hostname",
	        "HostsPath": "/var/lib/docker/containers/7b98f472af9d491f81c5b8eb824d0a8166fe8fb20a334f890f3e0bb83993480c/hosts",
	        "LogPath": "/var/lib/docker/containers/7b98f472af9d491f81c5b8eb824d0a8166fe8fb20a334f890f3e0bb83993480c/7b98f472af9d491f81c5b8eb824d0a8166fe8fb20a334f890f3e0bb83993480c-json.log",
	        "Name": "/ingress-addon-legacy-999051",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-999051:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-999051",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/49ff5ec9815974c6c05f82b6d379c77362c21494d03ec2a466e2794ef50b6a66-init/diff:/var/lib/docker/overlay2/1ffc828a09df1e9fa25f5092ba7b162a0fa5a6fe031a41b1f614792625eb1522/diff",
	                "MergedDir": "/var/lib/docker/overlay2/49ff5ec9815974c6c05f82b6d379c77362c21494d03ec2a466e2794ef50b6a66/merged",
	                "UpperDir": "/var/lib/docker/overlay2/49ff5ec9815974c6c05f82b6d379c77362c21494d03ec2a466e2794ef50b6a66/diff",
	                "WorkDir": "/var/lib/docker/overlay2/49ff5ec9815974c6c05f82b6d379c77362c21494d03ec2a466e2794ef50b6a66/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-999051",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-999051/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-999051",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-999051",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-999051",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c992e92862c33ffd04fb643638661d88c6c032f64593b39a8b5f7c4d9b33e8d9",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35887"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35886"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35883"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35885"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35884"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/c992e92862c3",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-999051": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "7b98f472af9d",
	                        "ingress-addon-legacy-999051"
	                    ],
	                    "NetworkID": "40673c2f78df38cb63a34a590805b5bbf40cb13234de26dce6e248c1ea648239",
	                    "EndpointID": "cc12bf87623f39b98fab1dbb5f2d60feb44f450b6778f16dd766a2c162ace79d",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ingress-addon-legacy-999051 -n ingress-addon-legacy-999051
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-999051 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-999051 logs -n 25: (1.453782511s)
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| Command |                                  Args                                  |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| image   | functional-262988 image load --daemon                                  | functional-262988           | jenkins | v1.31.2 | 02 Oct 23 11:50 UTC | 02 Oct 23 11:50 UTC |
	|         | gcr.io/google-containers/addon-resizer:functional-262988               |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image   | functional-262988 image ls                                             | functional-262988           | jenkins | v1.31.2 | 02 Oct 23 11:50 UTC | 02 Oct 23 11:50 UTC |
	| image   | functional-262988 image load --daemon                                  | functional-262988           | jenkins | v1.31.2 | 02 Oct 23 11:50 UTC | 02 Oct 23 11:50 UTC |
	|         | gcr.io/google-containers/addon-resizer:functional-262988               |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image   | functional-262988 image ls                                             | functional-262988           | jenkins | v1.31.2 | 02 Oct 23 11:50 UTC | 02 Oct 23 11:50 UTC |
	| image   | functional-262988 image save                                           | functional-262988           | jenkins | v1.31.2 | 02 Oct 23 11:50 UTC | 02 Oct 23 11:50 UTC |
	|         | gcr.io/google-containers/addon-resizer:functional-262988               |                             |         |         |                     |                     |
	|         | /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image   | functional-262988 image rm                                             | functional-262988           | jenkins | v1.31.2 | 02 Oct 23 11:50 UTC | 02 Oct 23 11:50 UTC |
	|         | gcr.io/google-containers/addon-resizer:functional-262988               |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image   | functional-262988 image ls                                             | functional-262988           | jenkins | v1.31.2 | 02 Oct 23 11:50 UTC | 02 Oct 23 11:50 UTC |
	| image   | functional-262988 image load                                           | functional-262988           | jenkins | v1.31.2 | 02 Oct 23 11:50 UTC | 02 Oct 23 11:50 UTC |
	|         | /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image   | functional-262988 image ls                                             | functional-262988           | jenkins | v1.31.2 | 02 Oct 23 11:50 UTC | 02 Oct 23 11:50 UTC |
	| image   | functional-262988 image save --daemon                                  | functional-262988           | jenkins | v1.31.2 | 02 Oct 23 11:50 UTC | 02 Oct 23 11:50 UTC |
	|         | gcr.io/google-containers/addon-resizer:functional-262988               |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image   | functional-262988                                                      | functional-262988           | jenkins | v1.31.2 | 02 Oct 23 11:50 UTC | 02 Oct 23 11:50 UTC |
	|         | image ls --format short                                                |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image   | functional-262988                                                      | functional-262988           | jenkins | v1.31.2 | 02 Oct 23 11:50 UTC | 02 Oct 23 11:50 UTC |
	|         | image ls --format yaml                                                 |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| ssh     | functional-262988 ssh pgrep                                            | functional-262988           | jenkins | v1.31.2 | 02 Oct 23 11:50 UTC |                     |
	|         | buildkitd                                                              |                             |         |         |                     |                     |
	| image   | functional-262988                                                      | functional-262988           | jenkins | v1.31.2 | 02 Oct 23 11:50 UTC | 02 Oct 23 11:50 UTC |
	|         | image ls --format json                                                 |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image   | functional-262988                                                      | functional-262988           | jenkins | v1.31.2 | 02 Oct 23 11:50 UTC | 02 Oct 23 11:50 UTC |
	|         | image ls --format table                                                |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image   | functional-262988 image build -t                                       | functional-262988           | jenkins | v1.31.2 | 02 Oct 23 11:50 UTC | 02 Oct 23 11:50 UTC |
	|         | localhost/my-image:functional-262988                                   |                             |         |         |                     |                     |
	|         | testdata/build --alsologtostderr                                       |                             |         |         |                     |                     |
	| image   | functional-262988 image ls                                             | functional-262988           | jenkins | v1.31.2 | 02 Oct 23 11:50 UTC | 02 Oct 23 11:50 UTC |
	| delete  | -p functional-262988                                                   | functional-262988           | jenkins | v1.31.2 | 02 Oct 23 11:50 UTC | 02 Oct 23 11:50 UTC |
	| start   | -p ingress-addon-legacy-999051                                         | ingress-addon-legacy-999051 | jenkins | v1.31.2 | 02 Oct 23 11:50 UTC | 02 Oct 23 11:52 UTC |
	|         | --kubernetes-version=v1.18.20                                          |                             |         |         |                     |                     |
	|         | --memory=4096 --wait=true                                              |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                      |                             |         |         |                     |                     |
	|         | -v=5 --driver=docker                                                   |                             |         |         |                     |                     |
	|         | --container-runtime=crio                                               |                             |         |         |                     |                     |
	| addons  | ingress-addon-legacy-999051                                            | ingress-addon-legacy-999051 | jenkins | v1.31.2 | 02 Oct 23 11:52 UTC | 02 Oct 23 11:52 UTC |
	|         | addons enable ingress                                                  |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=5                                                 |                             |         |         |                     |                     |
	| addons  | ingress-addon-legacy-999051                                            | ingress-addon-legacy-999051 | jenkins | v1.31.2 | 02 Oct 23 11:52 UTC | 02 Oct 23 11:52 UTC |
	|         | addons enable ingress-dns                                              |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=5                                                 |                             |         |         |                     |                     |
	| ssh     | ingress-addon-legacy-999051                                            | ingress-addon-legacy-999051 | jenkins | v1.31.2 | 02 Oct 23 11:52 UTC |                     |
	|         | ssh curl -s http://127.0.0.1/                                          |                             |         |         |                     |                     |
	|         | -H 'Host: nginx.example.com'                                           |                             |         |         |                     |                     |
	| ip      | ingress-addon-legacy-999051 ip                                         | ingress-addon-legacy-999051 | jenkins | v1.31.2 | 02 Oct 23 11:54 UTC | 02 Oct 23 11:54 UTC |
	| addons  | ingress-addon-legacy-999051                                            | ingress-addon-legacy-999051 | jenkins | v1.31.2 | 02 Oct 23 11:55 UTC | 02 Oct 23 11:55 UTC |
	|         | addons disable ingress-dns                                             |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                 |                             |         |         |                     |                     |
	| addons  | ingress-addon-legacy-999051                                            | ingress-addon-legacy-999051 | jenkins | v1.31.2 | 02 Oct 23 11:55 UTC | 02 Oct 23 11:55 UTC |
	|         | addons disable ingress                                                 |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                 |                             |         |         |                     |                     |
	|---------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/02 11:50:32
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.21.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 11:50:32.896789 2527262 out.go:296] Setting OutFile to fd 1 ...
	I1002 11:50:32.897047 2527262 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 11:50:32.897074 2527262 out.go:309] Setting ErrFile to fd 2...
	I1002 11:50:32.897093 2527262 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 11:50:32.897385 2527262 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17340-2494243/.minikube/bin
	I1002 11:50:32.897871 2527262 out.go:303] Setting JSON to false
	I1002 11:50:32.898961 2527262 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":70379,"bootTime":1696177054,"procs":285,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1045-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1002 11:50:32.899062 2527262 start.go:138] virtualization:  
	I1002 11:50:32.901567 2527262 out.go:177] * [ingress-addon-legacy-999051] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I1002 11:50:32.903670 2527262 out.go:177]   - MINIKUBE_LOCATION=17340
	I1002 11:50:32.905867 2527262 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 11:50:32.903892 2527262 notify.go:220] Checking for updates...
	I1002 11:50:32.908203 2527262 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17340-2494243/kubeconfig
	I1002 11:50:32.910056 2527262 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17340-2494243/.minikube
	I1002 11:50:32.911947 2527262 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 11:50:32.913927 2527262 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 11:50:32.915988 2527262 driver.go:373] Setting default libvirt URI to qemu:///system
	I1002 11:50:32.942917 2527262 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I1002 11:50:32.943019 2527262 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 11:50:33.053225 2527262 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:46 SystemTime:2023-10-02 11:50:33.038447808 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1045-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1002 11:50:33.053351 2527262 docker.go:294] overlay module found
	I1002 11:50:33.055578 2527262 out.go:177] * Using the docker driver based on user configuration
	I1002 11:50:33.057326 2527262 start.go:298] selected driver: docker
	I1002 11:50:33.057365 2527262 start.go:902] validating driver "docker" against <nil>
	I1002 11:50:33.057386 2527262 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 11:50:33.058086 2527262 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 11:50:33.130933 2527262 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:46 SystemTime:2023-10-02 11:50:33.120912294 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1045-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1002 11:50:33.131133 2527262 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I1002 11:50:33.131376 2527262 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 11:50:33.133619 2527262 out.go:177] * Using Docker driver with root privileges
	I1002 11:50:33.135586 2527262 cni.go:84] Creating CNI manager for ""
	I1002 11:50:33.135613 2527262 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 11:50:33.135627 2527262 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I1002 11:50:33.135645 2527262 start_flags.go:321] config:
	{Name:ingress-addon-legacy-999051 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-999051 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 11:50:33.139991 2527262 out.go:177] * Starting control plane node ingress-addon-legacy-999051 in cluster ingress-addon-legacy-999051
	I1002 11:50:33.142144 2527262 cache.go:122] Beginning downloading kic base image for docker with crio
	I1002 11:50:33.144171 2527262 out.go:177] * Pulling base image ...
	I1002 11:50:33.146088 2527262 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1002 11:50:33.146174 2527262 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 in local docker daemon
	I1002 11:50:33.164212 2527262 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 in local docker daemon, skipping pull
	I1002 11:50:33.164243 2527262 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 exists in daemon, skipping load
	I1002 11:50:33.207404 2527262 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4
	I1002 11:50:33.207428 2527262 cache.go:57] Caching tarball of preloaded images
	I1002 11:50:33.207572 2527262 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1002 11:50:33.209912 2527262 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I1002 11:50:33.212185 2527262 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 ...
	I1002 11:50:33.329736 2527262 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4?checksum=md5:8ddd7f37d9a9977fe856222993d36c3d -> /home/jenkins/minikube-integration/17340-2494243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4
	I1002 11:50:43.125350 2527262 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 ...
	I1002 11:50:43.126096 2527262 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17340-2494243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 ...
	I1002 11:50:44.326658 2527262 cache.go:60] Finished verifying existence of preloaded tar for  v1.18.20 on crio
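	The preload steps above (download.go / preload.go) fetch the tarball with an `md5:` checksum in the URL query, then verify the local copy before trusting the cache. A minimal sketch of that verification step, with a hypothetical `verify_md5` helper (the real implementation lives in minikube's Go code):

```python
import hashlib

def verify_md5(path: str, expected_hex: str, chunk_size: int = 1 << 20) -> bool:
    """Stream the file in chunks and compare its MD5 digest to the expected hex string."""
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest() == expected_hex
```

	Streaming in chunks keeps memory flat even for multi-hundred-MB preload tarballs like the one downloaded here.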
	I1002 11:50:44.327035 2527262 profile.go:148] Saving config to /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/ingress-addon-legacy-999051/config.json ...
	I1002 11:50:44.327074 2527262 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/ingress-addon-legacy-999051/config.json: {Name:mke96e1195796b814147ed4221f960e36e4893ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 11:50:44.327284 2527262 cache.go:195] Successfully downloaded all kic artifacts
	I1002 11:50:44.327361 2527262 start.go:365] acquiring machines lock for ingress-addon-legacy-999051: {Name:mk8a318b71d906b326cacdb2097f706187e7786a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 11:50:44.327428 2527262 start.go:369] acquired machines lock for "ingress-addon-legacy-999051" in 50.634µs
	I1002 11:50:44.327453 2527262 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-999051 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-999051 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 11:50:44.327526 2527262 start.go:125] createHost starting for "" (driver="docker")
	I1002 11:50:44.330048 2527262 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1002 11:50:44.330438 2527262 start.go:159] libmachine.API.Create for "ingress-addon-legacy-999051" (driver="docker")
	I1002 11:50:44.330472 2527262 client.go:168] LocalClient.Create starting
	I1002 11:50:44.330539 2527262 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17340-2494243/.minikube/certs/ca.pem
	I1002 11:50:44.330577 2527262 main.go:141] libmachine: Decoding PEM data...
	I1002 11:50:44.330596 2527262 main.go:141] libmachine: Parsing certificate...
	I1002 11:50:44.330679 2527262 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17340-2494243/.minikube/certs/cert.pem
	I1002 11:50:44.330702 2527262 main.go:141] libmachine: Decoding PEM data...
	I1002 11:50:44.330717 2527262 main.go:141] libmachine: Parsing certificate...
	I1002 11:50:44.331117 2527262 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-999051 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1002 11:50:44.348492 2527262 cli_runner.go:211] docker network inspect ingress-addon-legacy-999051 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1002 11:50:44.348623 2527262 network_create.go:281] running [docker network inspect ingress-addon-legacy-999051] to gather additional debugging logs...
	I1002 11:50:44.348648 2527262 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-999051
	W1002 11:50:44.371724 2527262 cli_runner.go:211] docker network inspect ingress-addon-legacy-999051 returned with exit code 1
	I1002 11:50:44.371758 2527262 network_create.go:284] error running [docker network inspect ingress-addon-legacy-999051]: docker network inspect ingress-addon-legacy-999051: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ingress-addon-legacy-999051 not found
	I1002 11:50:44.371776 2527262 network_create.go:286] output of [docker network inspect ingress-addon-legacy-999051]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ingress-addon-legacy-999051 not found
	
	** /stderr **
	I1002 11:50:44.371844 2527262 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 11:50:44.394274 2527262 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4000f0daa0}
	I1002 11:50:44.394331 2527262 network_create.go:123] attempt to create docker network ingress-addon-legacy-999051 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1002 11:50:44.394424 2527262 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-999051 ingress-addon-legacy-999051
	I1002 11:50:44.469081 2527262 network_create.go:107] docker network ingress-addon-legacy-999051 192.168.49.0/24 created
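	The network_create.go step above first inspects the named network (which fails, as shown), then scans private subnets starting at 192.168.49.0/24 and creates a bridge network on the first free one. A sketch of that candidate-subnet walk, under the assumption that the caller supplies the set of subnets already in use and that candidates advance by a configurable per-try step (the real probe asks the Docker daemon, not a set):

```python
import ipaddress

def first_free_subnet(taken: set, start: str = "192.168.49.0/24",
                      step: int = 9, tries: int = 20):
    """Walk /24 candidate subnets upward from `start`; return the first not in `taken`."""
    net = ipaddress.ip_network(start)
    for _ in range(tries):
        if str(net) not in taken:
            return str(net)
        # advance the third octet by `step` to the next candidate /24
        base = int(net.network_address) + step * 256
        net = ipaddress.ip_network((base, 24))
    return None
```

	With nothing taken this yields 192.168.49.0/24, matching the subnet chosen in the log above.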
	I1002 11:50:44.469116 2527262 kic.go:117] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-999051" container
	I1002 11:50:44.469194 2527262 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1002 11:50:44.487232 2527262 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-999051 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-999051 --label created_by.minikube.sigs.k8s.io=true
	I1002 11:50:44.506618 2527262 oci.go:103] Successfully created a docker volume ingress-addon-legacy-999051
	I1002 11:50:44.506742 2527262 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-999051-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-999051 --entrypoint /usr/bin/test -v ingress-addon-legacy-999051:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 -d /var/lib
	I1002 11:50:46.024607 2527262 cli_runner.go:217] Completed: docker run --rm --name ingress-addon-legacy-999051-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-999051 --entrypoint /usr/bin/test -v ingress-addon-legacy-999051:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 -d /var/lib: (1.517813089s)
	I1002 11:50:46.024641 2527262 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-999051
	I1002 11:50:46.024674 2527262 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1002 11:50:46.024696 2527262 kic.go:190] Starting extracting preloaded images to volume ...
	I1002 11:50:46.024789 2527262 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17340-2494243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-999051:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 -I lz4 -xf /preloaded.tar -C /extractDir
	I1002 11:50:50.842436 2527262 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17340-2494243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-999051:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 -I lz4 -xf /preloaded.tar -C /extractDir: (4.81760143s)
	I1002 11:50:50.842471 2527262 kic.go:199] duration metric: took 4.817772 seconds to extract preloaded images to volume
	W1002 11:50:50.842641 2527262 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1002 11:50:50.842761 2527262 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1002 11:50:50.911251 2527262 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-999051 --name ingress-addon-legacy-999051 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-999051 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-999051 --network ingress-addon-legacy-999051 --ip 192.168.49.2 --volume ingress-addon-legacy-999051:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3
	I1002 11:50:51.277333 2527262 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-999051 --format={{.State.Running}}
	I1002 11:50:51.306434 2527262 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-999051 --format={{.State.Status}}
	I1002 11:50:51.329074 2527262 cli_runner.go:164] Run: docker exec ingress-addon-legacy-999051 stat /var/lib/dpkg/alternatives/iptables
	I1002 11:50:51.415648 2527262 oci.go:144] the created container "ingress-addon-legacy-999051" has a running status.
	I1002 11:50:51.415675 2527262 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/17340-2494243/.minikube/machines/ingress-addon-legacy-999051/id_rsa...
	I1002 11:50:51.687911 2527262 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2494243/.minikube/machines/ingress-addon-legacy-999051/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1002 11:50:51.688006 2527262 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17340-2494243/.minikube/machines/ingress-addon-legacy-999051/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1002 11:50:51.722503 2527262 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-999051 --format={{.State.Status}}
	I1002 11:50:51.750097 2527262 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1002 11:50:51.750115 2527262 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-999051 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1002 11:50:51.870184 2527262 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-999051 --format={{.State.Status}}
	I1002 11:50:51.896058 2527262 machine.go:88] provisioning docker machine ...
	I1002 11:50:51.896087 2527262 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-999051"
	I1002 11:50:51.896155 2527262 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-999051
	I1002 11:50:51.923478 2527262 main.go:141] libmachine: Using SSH client type: native
	I1002 11:50:51.923914 2527262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ac3c0] 0x3aeb30 <nil>  [] 0s} 127.0.0.1 35887 <nil> <nil>}
	I1002 11:50:51.923935 2527262 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-999051 && echo "ingress-addon-legacy-999051" | sudo tee /etc/hostname
	I1002 11:50:51.924616 2527262 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1002 11:50:55.092426 2527262 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-999051
	
	I1002 11:50:55.092598 2527262 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-999051
	I1002 11:50:55.118537 2527262 main.go:141] libmachine: Using SSH client type: native
	I1002 11:50:55.118999 2527262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ac3c0] 0x3aeb30 <nil>  [] 0s} 127.0.0.1 35887 <nil> <nil>}
	I1002 11:50:55.119019 2527262 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-999051' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-999051/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-999051' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 11:50:55.266412 2527262 main.go:141] libmachine: SSH cmd err, output: <nil>: 
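	The SSH command above is an idempotent /etc/hosts update: do nothing if the hostname is already mapped, rewrite an existing 127.0.1.1 line if there is one, and only otherwise append. The same branching, sketched as a pure function over the file contents (an illustration of the shell logic, not minikube's actual code):

```python
import re

def ensure_hosts_entry(hosts: str, hostname: str) -> str:
    """Return hosts-file text that maps 127.0.1.1 to `hostname`, idempotently."""
    if re.search(r"^.*\s" + re.escape(hostname) + r"$", hosts, flags=re.M):
        return hosts  # hostname already present on some line (the grep -xq check)
    if re.search(r"^127\.0\.1\.1\s.*$", hosts, flags=re.M):
        # rewrite the existing 127.0.1.1 line, as the sed branch does
        return re.sub(r"^127\.0\.1\.1\s.*$", "127.0.1.1 " + hostname, hosts, flags=re.M)
    # otherwise append, as the `tee -a` branch does
    return hosts.rstrip("\n") + "\n127.0.1.1 " + hostname + "\n"
```

	Running it twice with the same hostname is a no-op, which is why the SSH command here reports empty output on an already-provisioned machine.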
	I1002 11:50:55.266438 2527262 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17340-2494243/.minikube CaCertPath:/home/jenkins/minikube-integration/17340-2494243/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17340-2494243/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17340-2494243/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17340-2494243/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17340-2494243/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17340-2494243/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17340-2494243/.minikube}
	I1002 11:50:55.266469 2527262 ubuntu.go:177] setting up certificates
	I1002 11:50:55.266478 2527262 provision.go:83] configureAuth start
	I1002 11:50:55.266550 2527262 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-999051
	I1002 11:50:55.285003 2527262 provision.go:138] copyHostCerts
	I1002 11:50:55.285054 2527262 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2494243/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17340-2494243/.minikube/ca.pem
	I1002 11:50:55.285087 2527262 exec_runner.go:144] found /home/jenkins/minikube-integration/17340-2494243/.minikube/ca.pem, removing ...
	I1002 11:50:55.285098 2527262 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17340-2494243/.minikube/ca.pem
	I1002 11:50:55.285179 2527262 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-2494243/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17340-2494243/.minikube/ca.pem (1082 bytes)
	I1002 11:50:55.285266 2527262 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2494243/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17340-2494243/.minikube/cert.pem
	I1002 11:50:55.285291 2527262 exec_runner.go:144] found /home/jenkins/minikube-integration/17340-2494243/.minikube/cert.pem, removing ...
	I1002 11:50:55.285300 2527262 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17340-2494243/.minikube/cert.pem
	I1002 11:50:55.285328 2527262 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-2494243/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17340-2494243/.minikube/cert.pem (1123 bytes)
	I1002 11:50:55.285374 2527262 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2494243/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17340-2494243/.minikube/key.pem
	I1002 11:50:55.285396 2527262 exec_runner.go:144] found /home/jenkins/minikube-integration/17340-2494243/.minikube/key.pem, removing ...
	I1002 11:50:55.285403 2527262 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17340-2494243/.minikube/key.pem
	I1002 11:50:55.285429 2527262 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-2494243/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17340-2494243/.minikube/key.pem (1675 bytes)
	I1002 11:50:55.285479 2527262 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17340-2494243/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17340-2494243/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17340-2494243/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-999051 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-999051]
	I1002 11:50:55.927002 2527262 provision.go:172] copyRemoteCerts
	I1002 11:50:55.927069 2527262 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 11:50:55.927110 2527262 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-999051
	I1002 11:50:55.950280 2527262 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35887 SSHKeyPath:/home/jenkins/minikube-integration/17340-2494243/.minikube/machines/ingress-addon-legacy-999051/id_rsa Username:docker}
	I1002 11:50:56.052141 2527262 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2494243/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1002 11:50:56.052218 2527262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2494243/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I1002 11:50:56.081785 2527262 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2494243/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1002 11:50:56.081856 2527262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2494243/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1002 11:50:56.111824 2527262 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2494243/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1002 11:50:56.111934 2527262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2494243/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 11:50:56.141132 2527262 provision.go:86] duration metric: configureAuth took 874.613261ms
	I1002 11:50:56.141163 2527262 ubuntu.go:193] setting minikube options for container-runtime
	I1002 11:50:56.141362 2527262 config.go:182] Loaded profile config "ingress-addon-legacy-999051": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I1002 11:50:56.141470 2527262 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-999051
	I1002 11:50:56.161928 2527262 main.go:141] libmachine: Using SSH client type: native
	I1002 11:50:56.162361 2527262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ac3c0] 0x3aeb30 <nil>  [] 0s} 127.0.0.1 35887 <nil> <nil>}
	I1002 11:50:56.162389 2527262 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 11:50:56.441513 2527262 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 11:50:56.441538 2527262 machine.go:91] provisioned docker machine in 4.545461585s
	I1002 11:50:56.441549 2527262 client.go:171] LocalClient.Create took 12.111067155s
	I1002 11:50:56.441559 2527262 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-999051" took 12.111121316s
	I1002 11:50:56.441567 2527262 start.go:300] post-start starting for "ingress-addon-legacy-999051" (driver="docker")
	I1002 11:50:56.441576 2527262 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 11:50:56.441646 2527262 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 11:50:56.441694 2527262 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-999051
	I1002 11:50:56.467362 2527262 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35887 SSHKeyPath:/home/jenkins/minikube-integration/17340-2494243/.minikube/machines/ingress-addon-legacy-999051/id_rsa Username:docker}
	I1002 11:50:56.568010 2527262 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 11:50:56.572931 2527262 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 11:50:56.572966 2527262 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1002 11:50:56.572977 2527262 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1002 11:50:56.572984 2527262 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1002 11:50:56.572995 2527262 filesync.go:126] Scanning /home/jenkins/minikube-integration/17340-2494243/.minikube/addons for local assets ...
	I1002 11:50:56.573055 2527262 filesync.go:126] Scanning /home/jenkins/minikube-integration/17340-2494243/.minikube/files for local assets ...
	I1002 11:50:56.573133 2527262 filesync.go:149] local asset: /home/jenkins/minikube-integration/17340-2494243/.minikube/files/etc/ssl/certs/24995982.pem -> 24995982.pem in /etc/ssl/certs
	I1002 11:50:56.573141 2527262 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2494243/.minikube/files/etc/ssl/certs/24995982.pem -> /etc/ssl/certs/24995982.pem
	I1002 11:50:56.573250 2527262 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 11:50:56.583935 2527262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2494243/.minikube/files/etc/ssl/certs/24995982.pem --> /etc/ssl/certs/24995982.pem (1708 bytes)
	I1002 11:50:56.613741 2527262 start.go:303] post-start completed in 172.159107ms
	I1002 11:50:56.614112 2527262 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-999051
	I1002 11:50:56.632376 2527262 profile.go:148] Saving config to /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/ingress-addon-legacy-999051/config.json ...
	I1002 11:50:56.632768 2527262 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 11:50:56.632825 2527262 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-999051
	I1002 11:50:56.654678 2527262 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35887 SSHKeyPath:/home/jenkins/minikube-integration/17340-2494243/.minikube/machines/ingress-addon-legacy-999051/id_rsa Username:docker}
	I1002 11:50:56.750502 2527262 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 11:50:56.756820 2527262 start.go:128] duration metric: createHost completed in 12.429279454s
	I1002 11:50:56.756843 2527262 start.go:83] releasing machines lock for "ingress-addon-legacy-999051", held for 12.42940121s
	I1002 11:50:56.756936 2527262 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-999051
	I1002 11:50:56.774511 2527262 ssh_runner.go:195] Run: cat /version.json
	I1002 11:50:56.774549 2527262 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 11:50:56.774564 2527262 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-999051
	I1002 11:50:56.774610 2527262 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-999051
	I1002 11:50:56.795232 2527262 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35887 SSHKeyPath:/home/jenkins/minikube-integration/17340-2494243/.minikube/machines/ingress-addon-legacy-999051/id_rsa Username:docker}
	I1002 11:50:56.796773 2527262 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35887 SSHKeyPath:/home/jenkins/minikube-integration/17340-2494243/.minikube/machines/ingress-addon-legacy-999051/id_rsa Username:docker}
	I1002 11:50:57.034711 2527262 ssh_runner.go:195] Run: systemctl --version
	I1002 11:50:57.041041 2527262 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 11:50:57.192472 2527262 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1002 11:50:57.198512 2527262 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 11:50:57.224218 2527262 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1002 11:50:57.224329 2527262 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 11:50:57.270626 2527262 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I1002 11:50:57.270687 2527262 start.go:469] detecting cgroup driver to use...
	I1002 11:50:57.270735 2527262 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1002 11:50:57.270812 2527262 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 11:50:57.291996 2527262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 11:50:57.305871 2527262 docker.go:197] disabling cri-docker service (if available) ...
	I1002 11:50:57.305956 2527262 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 11:50:57.322883 2527262 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 11:50:57.340599 2527262 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 11:50:57.444321 2527262 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 11:50:57.553715 2527262 docker.go:213] disabling docker service ...
	I1002 11:50:57.553837 2527262 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 11:50:57.576614 2527262 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 11:50:57.591702 2527262 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 11:50:57.686949 2527262 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 11:50:57.795413 2527262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 11:50:57.808862 2527262 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 11:50:57.828240 2527262 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1002 11:50:57.828315 2527262 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 11:50:57.840445 2527262 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 11:50:57.840600 2527262 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 11:50:57.852347 2527262 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 11:50:57.864127 2527262 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 11:50:57.875813 2527262 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 11:50:57.887228 2527262 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 11:50:57.897923 2527262 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 11:50:57.910784 2527262 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 11:50:58.001138 2527262 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 11:50:58.133135 2527262 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 11:50:58.133201 2527262 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 11:50:58.138055 2527262 start.go:537] Will wait 60s for crictl version
	I1002 11:50:58.138115 2527262 ssh_runner.go:195] Run: which crictl
	I1002 11:50:58.142734 2527262 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1002 11:50:58.184493 2527262 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1002 11:50:58.184604 2527262 ssh_runner.go:195] Run: crio --version
	I1002 11:50:58.233141 2527262 ssh_runner.go:195] Run: crio --version
	I1002 11:50:58.282921 2527262 out.go:177] * Preparing Kubernetes v1.18.20 on CRI-O 1.24.6 ...
	I1002 11:50:58.285009 2527262 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-999051 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 11:50:58.302967 2527262 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1002 11:50:58.307652 2527262 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 11:50:58.321075 2527262 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1002 11:50:58.321147 2527262 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 11:50:58.377109 2527262 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I1002 11:50:58.377186 2527262 ssh_runner.go:195] Run: which lz4
	I1002 11:50:58.381847 2527262 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2494243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 -> /preloaded.tar.lz4
	I1002 11:50:58.381945 2527262 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1002 11:50:58.386267 2527262 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1002 11:50:58.386301 2527262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2494243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 --> /preloaded.tar.lz4 (489766197 bytes)
	I1002 11:51:00.677225 2527262 crio.go:444] Took 2.295317 seconds to copy over tarball
	I1002 11:51:00.677334 2527262 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1002 11:51:03.351667 2527262 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.674296194s)
	I1002 11:51:03.351740 2527262 crio.go:451] Took 2.674485 seconds to extract the tarball
	I1002 11:51:03.351758 2527262 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1002 11:51:03.489476 2527262 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 11:51:03.534318 2527262 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I1002 11:51:03.534343 2527262 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1002 11:51:03.534419 2527262 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1002 11:51:03.534463 2527262 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I1002 11:51:03.534609 2527262 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I1002 11:51:03.534432 2527262 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 11:51:03.534681 2527262 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I1002 11:51:03.534732 2527262 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I1002 11:51:03.534609 2527262 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I1002 11:51:03.534820 2527262 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I1002 11:51:03.535966 2527262 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 11:51:03.536498 2527262 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I1002 11:51:03.536829 2527262 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I1002 11:51:03.536996 2527262 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1002 11:51:03.537124 2527262 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1002 11:51:03.537257 2527262 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I1002 11:51:03.537511 2527262 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I1002 11:51:03.537653 2527262 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	W1002 11:51:03.889418 2527262 image.go:265] image registry.k8s.io/etcd:3.4.3-0 arch mismatch: want arm64 got amd64. fixing
	I1002 11:51:03.889723 2527262 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I1002 11:51:03.935960 2527262 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "29dd247b2572efbe28fcaea3fef1c5d72593da59f7350e3f6d2e6618983f9c03" in container runtime
	I1002 11:51:03.936035 2527262 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.3-0
	I1002 11:51:03.936096 2527262 ssh_runner.go:195] Run: which crictl
	I1002 11:51:03.943658 2527262 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.3-0
	W1002 11:51:03.953400 2527262 image.go:265] image registry.k8s.io/kube-scheduler:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1002 11:51:03.953563 2527262 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	W1002 11:51:03.975853 2527262 image.go:265] image registry.k8s.io/coredns:1.6.7 arch mismatch: want arm64 got amd64. fixing
	I1002 11:51:03.976060 2527262 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	W1002 11:51:03.985460 2527262 image.go:265] image registry.k8s.io/kube-apiserver:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1002 11:51:03.985641 2527262 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	I1002 11:51:03.990427 2527262 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	W1002 11:51:03.990661 2527262 image.go:265] image registry.k8s.io/kube-proxy:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1002 11:51:03.990876 2527262 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	W1002 11:51:03.991625 2527262 image.go:265] image registry.k8s.io/kube-controller-manager:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1002 11:51:03.991762 2527262 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	I1002 11:51:04.033466 2527262 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17340-2494243/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.3-0
	I1002 11:51:04.050690 2527262 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "177548d745cb87f773d02f41d453af2f2a1479dbe3c32e749cf6d8145c005e79" in container runtime
	I1002 11:51:04.050794 2527262 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I1002 11:51:04.050871 2527262 ssh_runner.go:195] Run: which crictl
	I1002 11:51:04.160018 2527262 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "ff3af22d8878afc6985d3fec3e066d00ef431aa166c3a01ac58f1990adc92a2c" in container runtime
	I1002 11:51:04.160058 2527262 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.7
	I1002 11:51:04.160113 2527262 ssh_runner.go:195] Run: which crictl
	I1002 11:51:04.160191 2527262 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "d353007847ec85700463981309a5846c8d9c93fbcd1323104266212926d68257" in container runtime
	I1002 11:51:04.160215 2527262 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I1002 11:51:04.160236 2527262 ssh_runner.go:195] Run: which crictl
	I1002 11:51:04.178135 2527262 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "b11cdc97ac6ac4ef2b3b0662edbe16597084b17cbc8e3d61fcaf4ef827a7ed18" in container runtime
	I1002 11:51:04.178181 2527262 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I1002 11:51:04.178226 2527262 ssh_runner.go:195] Run: which crictl
	I1002 11:51:04.178300 2527262 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "2a060e2e7101d419352bf82c613158587400be743482d9a537ec4a9d1b4eb93c" in container runtime
	I1002 11:51:04.178317 2527262 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1002 11:51:04.178343 2527262 ssh_runner.go:195] Run: which crictl
	I1002 11:51:04.185683 2527262 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "297c79afbdb81ceb4cf857e0c54a0de7b6ce7ebe01e6cab68fc8baf342be3ea7" in container runtime
	I1002 11:51:04.185742 2527262 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1002 11:51:04.185814 2527262 ssh_runner.go:195] Run: which crictl
	I1002 11:51:04.185896 2527262 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.18.20
	I1002 11:51:04.185968 2527262 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.18.20
	I1002 11:51:04.186025 2527262 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.7
	I1002 11:51:04.186112 2527262 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.18.20
	I1002 11:51:04.189102 2527262 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1002 11:51:04.208557 2527262 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.18.20
	W1002 11:51:04.234112 2527262 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1002 11:51:04.234281 2527262 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 11:51:04.371994 2527262 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17340-2494243/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.18.20
	I1002 11:51:04.372107 2527262 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17340-2494243/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.18.20
	I1002 11:51:04.372226 2527262 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17340-2494243/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.18.20
	I1002 11:51:04.372313 2527262 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17340-2494243/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2
	I1002 11:51:04.372353 2527262 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17340-2494243/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.6.7
	I1002 11:51:04.372485 2527262 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17340-2494243/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.18.20
	I1002 11:51:04.494830 2527262 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1002 11:51:04.494873 2527262 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 11:51:04.494921 2527262 ssh_runner.go:195] Run: which crictl
	I1002 11:51:04.499364 2527262 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 11:51:04.560019 2527262 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17340-2494243/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1002 11:51:04.560095 2527262 cache_images.go:92] LoadImages completed in 1.025736638s
	W1002 11:51:04.560180 2527262 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17340-2494243/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.3-0: no such file or directory
	I1002 11:51:04.560268 2527262 ssh_runner.go:195] Run: crio config
	I1002 11:51:04.615015 2527262 cni.go:84] Creating CNI manager for ""
	I1002 11:51:04.615037 2527262 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 11:51:04.615068 2527262 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1002 11:51:04.615091 2527262 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-999051 NodeName:ingress-addon-legacy-999051 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1002 11:51:04.615228 2527262 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "ingress-addon-legacy-999051"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 11:51:04.615314 2527262 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=ingress-addon-legacy-999051 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-999051 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1002 11:51:04.615381 2527262 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I1002 11:51:04.626418 2527262 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 11:51:04.626537 2527262 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 11:51:04.637433 2527262 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (486 bytes)
	I1002 11:51:04.659658 2527262 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I1002 11:51:04.682090 2527262 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1002 11:51:04.704472 2527262 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1002 11:51:04.709087 2527262 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 11:51:04.723596 2527262 certs.go:56] Setting up /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/ingress-addon-legacy-999051 for IP: 192.168.49.2
	I1002 11:51:04.723678 2527262 certs.go:190] acquiring lock for shared ca certs: {Name:mk2e28f0a4c3849593f708b97426b4e4332dc9e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 11:51:04.723851 2527262 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17340-2494243/.minikube/ca.key
	I1002 11:51:04.723898 2527262 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17340-2494243/.minikube/proxy-client-ca.key
	I1002 11:51:04.723951 2527262 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/ingress-addon-legacy-999051/client.key
	I1002 11:51:04.723968 2527262 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/ingress-addon-legacy-999051/client.crt with IP's: []
	I1002 11:51:05.041733 2527262 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/ingress-addon-legacy-999051/client.crt ...
	I1002 11:51:05.041768 2527262 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/ingress-addon-legacy-999051/client.crt: {Name:mk41f681be3c0ff7298ded949c216ac99e8bc622 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 11:51:05.041977 2527262 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/ingress-addon-legacy-999051/client.key ...
	I1002 11:51:05.041989 2527262 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/ingress-addon-legacy-999051/client.key: {Name:mkfe2451d2454d81482112834dc38182174fddae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 11:51:05.042078 2527262 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/ingress-addon-legacy-999051/apiserver.key.dd3b5fb2
	I1002 11:51:05.042096 2527262 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/ingress-addon-legacy-999051/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1002 11:51:05.865734 2527262 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/ingress-addon-legacy-999051/apiserver.crt.dd3b5fb2 ...
	I1002 11:51:05.865770 2527262 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/ingress-addon-legacy-999051/apiserver.crt.dd3b5fb2: {Name:mk86097b8d0ce8d7fc664706f2c6a3bf4b1f7fc5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 11:51:05.865958 2527262 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/ingress-addon-legacy-999051/apiserver.key.dd3b5fb2 ...
	I1002 11:51:05.865971 2527262 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/ingress-addon-legacy-999051/apiserver.key.dd3b5fb2: {Name:mk50282a41802abe3d19b5dd0945b35bd0a12758 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 11:51:05.866051 2527262 certs.go:337] copying /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/ingress-addon-legacy-999051/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/ingress-addon-legacy-999051/apiserver.crt
	I1002 11:51:05.866133 2527262 certs.go:341] copying /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/ingress-addon-legacy-999051/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/ingress-addon-legacy-999051/apiserver.key
	I1002 11:51:05.866192 2527262 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/ingress-addon-legacy-999051/proxy-client.key
	I1002 11:51:05.866208 2527262 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/ingress-addon-legacy-999051/proxy-client.crt with IP's: []
	I1002 11:51:06.084508 2527262 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/ingress-addon-legacy-999051/proxy-client.crt ...
	I1002 11:51:06.084550 2527262 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/ingress-addon-legacy-999051/proxy-client.crt: {Name:mk9f99435ac88a7436fbec0bebdd93fd2c20f9ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 11:51:06.084758 2527262 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/ingress-addon-legacy-999051/proxy-client.key ...
	I1002 11:51:06.084772 2527262 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/ingress-addon-legacy-999051/proxy-client.key: {Name:mk2fe3a7047499dccc6f9c42ed29223216cd2340 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 11:51:06.084870 2527262 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/ingress-addon-legacy-999051/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1002 11:51:06.084895 2527262 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/ingress-addon-legacy-999051/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1002 11:51:06.084907 2527262 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/ingress-addon-legacy-999051/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1002 11:51:06.084921 2527262 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/ingress-addon-legacy-999051/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1002 11:51:06.084937 2527262 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2494243/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1002 11:51:06.084953 2527262 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2494243/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1002 11:51:06.084968 2527262 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2494243/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1002 11:51:06.084984 2527262 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2494243/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1002 11:51:06.085042 2527262 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-2494243/.minikube/certs/home/jenkins/minikube-integration/17340-2494243/.minikube/certs/2499598.pem (1338 bytes)
	W1002 11:51:06.085081 2527262 certs.go:433] ignoring /home/jenkins/minikube-integration/17340-2494243/.minikube/certs/home/jenkins/minikube-integration/17340-2494243/.minikube/certs/2499598_empty.pem, impossibly tiny 0 bytes
	I1002 11:51:06.085097 2527262 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-2494243/.minikube/certs/home/jenkins/minikube-integration/17340-2494243/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 11:51:06.085125 2527262 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-2494243/.minikube/certs/home/jenkins/minikube-integration/17340-2494243/.minikube/certs/ca.pem (1082 bytes)
	I1002 11:51:06.085152 2527262 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-2494243/.minikube/certs/home/jenkins/minikube-integration/17340-2494243/.minikube/certs/cert.pem (1123 bytes)
	I1002 11:51:06.085189 2527262 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-2494243/.minikube/certs/home/jenkins/minikube-integration/17340-2494243/.minikube/certs/key.pem (1675 bytes)
	I1002 11:51:06.085240 2527262 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-2494243/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17340-2494243/.minikube/files/etc/ssl/certs/24995982.pem (1708 bytes)
	I1002 11:51:06.085276 2527262 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2494243/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1002 11:51:06.085294 2527262 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2494243/.minikube/certs/2499598.pem -> /usr/share/ca-certificates/2499598.pem
	I1002 11:51:06.085306 2527262 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2494243/.minikube/files/etc/ssl/certs/24995982.pem -> /usr/share/ca-certificates/24995982.pem
	I1002 11:51:06.085931 2527262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/ingress-addon-legacy-999051/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1002 11:51:06.117891 2527262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/ingress-addon-legacy-999051/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1002 11:51:06.148475 2527262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/ingress-addon-legacy-999051/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 11:51:06.177608 2527262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/ingress-addon-legacy-999051/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 11:51:06.206402 2527262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2494243/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 11:51:06.234563 2527262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2494243/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1002 11:51:06.262759 2527262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2494243/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 11:51:06.290348 2527262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2494243/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1002 11:51:06.318324 2527262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2494243/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 11:51:06.346552 2527262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2494243/.minikube/certs/2499598.pem --> /usr/share/ca-certificates/2499598.pem (1338 bytes)
	I1002 11:51:06.374269 2527262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2494243/.minikube/files/etc/ssl/certs/24995982.pem --> /usr/share/ca-certificates/24995982.pem (1708 bytes)
	I1002 11:51:06.402743 2527262 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 11:51:06.423956 2527262 ssh_runner.go:195] Run: openssl version
	I1002 11:51:06.431068 2527262 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 11:51:06.443216 2527262 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 11:51:06.447851 2527262 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct  2 11:39 /usr/share/ca-certificates/minikubeCA.pem
	I1002 11:51:06.447955 2527262 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 11:51:06.456374 2527262 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 11:51:06.468209 2527262 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2499598.pem && ln -fs /usr/share/ca-certificates/2499598.pem /etc/ssl/certs/2499598.pem"
	I1002 11:51:06.480271 2527262 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2499598.pem
	I1002 11:51:06.485067 2527262 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct  2 11:46 /usr/share/ca-certificates/2499598.pem
	I1002 11:51:06.485190 2527262 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2499598.pem
	I1002 11:51:06.493826 2527262 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2499598.pem /etc/ssl/certs/51391683.0"
	I1002 11:51:06.505711 2527262 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/24995982.pem && ln -fs /usr/share/ca-certificates/24995982.pem /etc/ssl/certs/24995982.pem"
	I1002 11:51:06.517744 2527262 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/24995982.pem
	I1002 11:51:06.522889 2527262 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct  2 11:46 /usr/share/ca-certificates/24995982.pem
	I1002 11:51:06.522957 2527262 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/24995982.pem
	I1002 11:51:06.531764 2527262 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/24995982.pem /etc/ssl/certs/3ec20f2e.0"
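The `openssl x509 -hash` / `ln -fs` sequence above follows OpenSSL's hashed-directory convention: trust lookup in `/etc/ssl/certs` resolves a certificate by a symlink named `<subject-hash>.0` pointing at the PEM file. A minimal, self-contained sketch of that convention (the temporary directory and throwaway CA here are illustrative stand-ins, not minikube's actual files):

```shell
# Recreate the hashed-symlink step from the log in a scratch directory.
tmp=$(mktemp -d)

# Generate a throwaway self-signed CA (stand-in for minikubeCA.pem).
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=minikubeCA" \
  -keyout "$tmp/ca.key" -out "$tmp/minikubeCA.pem" -days 1 2>/dev/null

# Subject hash: 8 hex chars derived from the certificate's subject DN.
hash=$(openssl x509 -hash -noout -in "$tmp/minikubeCA.pem")

# OpenSSL's c_rehash layout: <hash>.0 symlink next to the trust store.
ln -fs "$tmp/minikubeCA.pem" "$tmp/$hash.0"
ls -l "$tmp/$hash.0"
```

Because the hash depends only on the subject DN, re-running `openssl x509 -hash` on an unchanged certificate is idempotent, which is why the log can safely guard the symlink with `test -L ... || ln -fs ...`.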
	I1002 11:51:06.543773 2527262 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1002 11:51:06.548379 2527262 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1002 11:51:06.548450 2527262 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-999051 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-999051 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 11:51:06.548562 2527262 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 11:51:06.548647 2527262 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 11:51:06.589758 2527262 cri.go:89] found id: ""
	I1002 11:51:06.589878 2527262 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 11:51:06.600414 2527262 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 11:51:06.611087 2527262 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1002 11:51:06.611185 2527262 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 11:51:06.622131 2527262 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 11:51:06.622175 2527262 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 11:51:06.679231 2527262 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I1002 11:51:06.679481 2527262 kubeadm.go:322] [preflight] Running pre-flight checks
	I1002 11:51:06.736854 2527262 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I1002 11:51:06.736949 2527262 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1045-aws
	I1002 11:51:06.737000 2527262 kubeadm.go:322] OS: Linux
	I1002 11:51:06.737047 2527262 kubeadm.go:322] CGROUPS_CPU: enabled
	I1002 11:51:06.737096 2527262 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I1002 11:51:06.737146 2527262 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I1002 11:51:06.737196 2527262 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I1002 11:51:06.737243 2527262 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I1002 11:51:06.737292 2527262 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I1002 11:51:06.829127 2527262 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 11:51:06.829311 2527262 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 11:51:06.829456 2527262 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1002 11:51:07.076930 2527262 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 11:51:07.078365 2527262 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 11:51:07.078663 2527262 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1002 11:51:07.185053 2527262 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 11:51:07.189856 2527262 out.go:204]   - Generating certificates and keys ...
	I1002 11:51:07.189986 2527262 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1002 11:51:07.190056 2527262 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1002 11:51:07.482944 2527262 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1002 11:51:07.692228 2527262 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1002 11:51:08.414221 2527262 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1002 11:51:09.060571 2527262 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1002 11:51:09.650860 2527262 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1002 11:51:09.651301 2527262 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-999051 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1002 11:51:10.832158 2527262 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1002 11:51:10.832557 2527262 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-999051 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1002 11:51:11.347056 2527262 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1002 11:51:11.634764 2527262 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1002 11:51:12.124818 2527262 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1002 11:51:12.125136 2527262 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 11:51:12.587274 2527262 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 11:51:12.841050 2527262 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 11:51:13.202010 2527262 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 11:51:13.472167 2527262 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 11:51:13.472938 2527262 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 11:51:13.475399 2527262 out.go:204]   - Booting up control plane ...
	I1002 11:51:13.475514 2527262 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 11:51:13.493505 2527262 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 11:51:13.496564 2527262 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 11:51:13.501321 2527262 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 11:51:13.505167 2527262 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1002 11:51:27.508289 2527262 kubeadm.go:322] [apiclient] All control plane components are healthy after 14.002557 seconds
	I1002 11:51:27.508414 2527262 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1002 11:51:27.524777 2527262 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I1002 11:51:28.047706 2527262 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1002 11:51:28.047848 2527262 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-999051 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I1002 11:51:28.559970 2527262 kubeadm.go:322] [bootstrap-token] Using token: tbjwxx.pjo8tea1xns9k950
	I1002 11:51:28.561603 2527262 out.go:204]   - Configuring RBAC rules ...
	I1002 11:51:28.561743 2527262 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1002 11:51:28.571716 2527262 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1002 11:51:28.593967 2527262 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1002 11:51:28.598658 2527262 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1002 11:51:28.603535 2527262 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1002 11:51:28.608874 2527262 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1002 11:51:28.623343 2527262 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1002 11:51:28.913886 2527262 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1002 11:51:29.016506 2527262 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1002 11:51:29.016551 2527262 kubeadm.go:322] 
	I1002 11:51:29.016609 2527262 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1002 11:51:29.016614 2527262 kubeadm.go:322] 
	I1002 11:51:29.016687 2527262 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1002 11:51:29.016692 2527262 kubeadm.go:322] 
	I1002 11:51:29.016716 2527262 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1002 11:51:29.016771 2527262 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1002 11:51:29.016819 2527262 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1002 11:51:29.016824 2527262 kubeadm.go:322] 
	I1002 11:51:29.016873 2527262 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1002 11:51:29.016942 2527262 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1002 11:51:29.017006 2527262 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1002 11:51:29.017011 2527262 kubeadm.go:322] 
	I1002 11:51:29.017089 2527262 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1002 11:51:29.017178 2527262 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1002 11:51:29.017184 2527262 kubeadm.go:322] 
	I1002 11:51:29.017262 2527262 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token tbjwxx.pjo8tea1xns9k950 \
	I1002 11:51:29.017361 2527262 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:bafa40ad46197010727e96472103cc853e44f24d916d26f9ef93bdc8a951c012 \
	I1002 11:51:29.017383 2527262 kubeadm.go:322]     --control-plane 
	I1002 11:51:29.017388 2527262 kubeadm.go:322] 
	I1002 11:51:29.017467 2527262 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1002 11:51:29.017471 2527262 kubeadm.go:322] 
	I1002 11:51:29.017548 2527262 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token tbjwxx.pjo8tea1xns9k950 \
	I1002 11:51:29.017646 2527262 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:bafa40ad46197010727e96472103cc853e44f24d916d26f9ef93bdc8a951c012 
	I1002 11:51:29.020923 2527262 kubeadm.go:322] W1002 11:51:06.678365    1225 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I1002 11:51:29.021250 2527262 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1045-aws\n", err: exit status 1
	I1002 11:51:29.021381 2527262 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 11:51:29.021521 2527262 kubeadm.go:322] W1002 11:51:13.493903    1225 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1002 11:51:29.021647 2527262 kubeadm.go:322] W1002 11:51:13.499506    1225 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1002 11:51:29.021663 2527262 cni.go:84] Creating CNI manager for ""
	I1002 11:51:29.021671 2527262 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 11:51:29.024362 2527262 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1002 11:51:29.026444 2527262 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1002 11:51:29.031623 2527262 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.18.20/kubectl ...
	I1002 11:51:29.031642 2527262 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1002 11:51:29.058878 2527262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1002 11:51:29.483594 2527262 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1002 11:51:29.483740 2527262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:51:29.483815 2527262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=45957ed538272972541ab48cdf2c4b323d7f5c18 minikube.k8s.io/name=ingress-addon-legacy-999051 minikube.k8s.io/updated_at=2023_10_02T11_51_29_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:51:29.646467 2527262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:51:29.646563 2527262 ops.go:34] apiserver oom_adj: -16
	I1002 11:51:29.750142 2527262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:51:30.344668 2527262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:51:30.844276 2527262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:51:31.344418 2527262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:51:31.844933 2527262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:51:32.344766 2527262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:51:32.844176 2527262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:51:33.344732 2527262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:51:33.845097 2527262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:51:34.344697 2527262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:51:34.844340 2527262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:51:35.344336 2527262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:51:35.844229 2527262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:51:36.344300 2527262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:51:36.844727 2527262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:51:37.344700 2527262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:51:37.844123 2527262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:51:38.344584 2527262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:51:38.845025 2527262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:51:39.344116 2527262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:51:39.844769 2527262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:51:40.344155 2527262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:51:40.844474 2527262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:51:41.344335 2527262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:51:41.844200 2527262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:51:42.344278 2527262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:51:42.844245 2527262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:51:43.344927 2527262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:51:43.844286 2527262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:51:44.345010 2527262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:51:44.447975 2527262 kubeadm.go:1081] duration metric: took 14.964281685s to wait for elevateKubeSystemPrivileges.
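The run of `kubectl get sa default` entries above is a fixed-cadence retry loop: the same probe repeats roughly every 500ms until the default service account exists (here taking ~15s). A hedged sketch of that pattern; the real probe is `sudo kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig`, replaced by a stand-in `check` function so the sketch is self-contained:

```shell
# Poll a readiness command at ~500ms intervals until it succeeds or a
# deadline passes, mirroring the elevateKubeSystemPrivileges wait above.
check() { true; }           # stand-in for: kubectl get sa default ...
deadline=$((SECONDS + 60))  # overall wait budget in seconds
until check; do
  [ "$SECONDS" -ge "$deadline" ] && { echo "timed out" >&2; exit 1; }
  sleep 0.5                 # matches the ~500ms spacing of the log entries
done
result="sa-ready"
echo "$result"
```

The loop treats the probe's exit status as the readiness signal, so any transient API-server errors during startup simply cause another iteration rather than a failure.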
	I1002 11:51:44.448011 2527262 kubeadm.go:406] StartCluster complete in 37.899583083s
	I1002 11:51:44.448028 2527262 settings.go:142] acquiring lock: {Name:mkcc97fc5770241202468070273c0755324bf4b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 11:51:44.448130 2527262 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17340-2494243/kubeconfig
	I1002 11:51:44.449049 2527262 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17340-2494243/kubeconfig: {Name:mkf500c5450045c9557e34c3a61a2f3f38c10ea4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 11:51:44.449854 2527262 kapi.go:59] client config for ingress-addon-legacy-999051: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/ingress-addon-legacy-999051/client.crt", KeyFile:"/home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/ingress-addon-legacy-999051/client.key", CAFile:"/home/jenkins/minikube-integration/17340-2494243/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x169df20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 11:51:44.451295 2527262 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1002 11:51:44.451553 2527262 config.go:182] Loaded profile config "ingress-addon-legacy-999051": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I1002 11:51:44.451586 2527262 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1002 11:51:44.451642 2527262 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-999051"
	I1002 11:51:44.451656 2527262 addons.go:231] Setting addon storage-provisioner=true in "ingress-addon-legacy-999051"
	I1002 11:51:44.451710 2527262 host.go:66] Checking if "ingress-addon-legacy-999051" exists ...
	I1002 11:51:44.452156 2527262 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-999051 --format={{.State.Status}}
	I1002 11:51:44.452869 2527262 cert_rotation.go:137] Starting client certificate rotation controller
	I1002 11:51:44.452910 2527262 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-999051"
	I1002 11:51:44.452926 2527262 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-999051"
	I1002 11:51:44.453211 2527262 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-999051 --format={{.State.Status}}
	I1002 11:51:44.504674 2527262 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 11:51:44.506579 2527262 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 11:51:44.506602 2527262 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 11:51:44.506671 2527262 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-999051
	I1002 11:51:44.530767 2527262 kapi.go:59] client config for ingress-addon-legacy-999051: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/ingress-addon-legacy-999051/client.crt", KeyFile:"/home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/ingress-addon-legacy-999051/client.key", CAFile:"/home/jenkins/minikube-integration/17340-2494243/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x169df20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 11:51:44.531036 2527262 addons.go:231] Setting addon default-storageclass=true in "ingress-addon-legacy-999051"
	I1002 11:51:44.531072 2527262 host.go:66] Checking if "ingress-addon-legacy-999051" exists ...
	I1002 11:51:44.531560 2527262 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-999051 --format={{.State.Status}}
	I1002 11:51:44.539464 2527262 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-999051" context rescaled to 1 replicas
	I1002 11:51:44.539503 2527262 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 11:51:44.543276 2527262 out.go:177] * Verifying Kubernetes components...
	I1002 11:51:44.550209 2527262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 11:51:44.577826 2527262 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35887 SSHKeyPath:/home/jenkins/minikube-integration/17340-2494243/.minikube/machines/ingress-addon-legacy-999051/id_rsa Username:docker}
	I1002 11:51:44.578951 2527262 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 11:51:44.578967 2527262 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 11:51:44.579032 2527262 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-999051
	I1002 11:51:44.601755 2527262 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35887 SSHKeyPath:/home/jenkins/minikube-integration/17340-2494243/.minikube/machines/ingress-addon-legacy-999051/id_rsa Username:docker}
	I1002 11:51:44.660214 2527262 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1002 11:51:44.660925 2527262 kapi.go:59] client config for ingress-addon-legacy-999051: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/ingress-addon-legacy-999051/client.crt", KeyFile:"/home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/ingress-addon-legacy-999051/client.key", CAFile:"/home/jenkins/minikube-integration/17340-2494243/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x169df20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 11:51:44.661197 2527262 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-999051" to be "Ready" ...
	I1002 11:51:44.807841 2527262 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 11:51:44.811627 2527262 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 11:51:45.293904 2527262 start.go:923] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1002 11:51:45.493686 2527262 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I1002 11:51:45.497631 2527262 addons.go:502] enable addons completed in 1.046031513s: enabled=[default-storageclass storage-provisioner]
	I1002 11:51:46.686845 2527262 node_ready.go:58] node "ingress-addon-legacy-999051" has status "Ready":"False"
	I1002 11:51:49.182142 2527262 node_ready.go:58] node "ingress-addon-legacy-999051" has status "Ready":"False"
	I1002 11:51:51.182678 2527262 node_ready.go:58] node "ingress-addon-legacy-999051" has status "Ready":"False"
	I1002 11:51:52.681977 2527262 node_ready.go:49] node "ingress-addon-legacy-999051" has status "Ready":"True"
	I1002 11:51:52.682003 2527262 node_ready.go:38] duration metric: took 8.020789414s waiting for node "ingress-addon-legacy-999051" to be "Ready" ...
	I1002 11:51:52.682014 2527262 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 11:51:52.689619 2527262 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-jfdxw" in "kube-system" namespace to be "Ready" ...
	I1002 11:51:54.698371 2527262 pod_ready.go:102] pod "coredns-66bff467f8-jfdxw" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-10-02 11:51:45 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: HostIPs:[] PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I1002 11:51:56.700477 2527262 pod_ready.go:102] pod "coredns-66bff467f8-jfdxw" in "kube-system" namespace has status "Ready":"False"
	I1002 11:51:58.700820 2527262 pod_ready.go:102] pod "coredns-66bff467f8-jfdxw" in "kube-system" namespace has status "Ready":"False"
	I1002 11:51:59.200837 2527262 pod_ready.go:92] pod "coredns-66bff467f8-jfdxw" in "kube-system" namespace has status "Ready":"True"
	I1002 11:51:59.200867 2527262 pod_ready.go:81] duration metric: took 6.511161442s waiting for pod "coredns-66bff467f8-jfdxw" in "kube-system" namespace to be "Ready" ...
	I1002 11:51:59.200878 2527262 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-999051" in "kube-system" namespace to be "Ready" ...
	I1002 11:51:59.206273 2527262 pod_ready.go:92] pod "etcd-ingress-addon-legacy-999051" in "kube-system" namespace has status "Ready":"True"
	I1002 11:51:59.206297 2527262 pod_ready.go:81] duration metric: took 5.410984ms waiting for pod "etcd-ingress-addon-legacy-999051" in "kube-system" namespace to be "Ready" ...
	I1002 11:51:59.206314 2527262 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-999051" in "kube-system" namespace to be "Ready" ...
	I1002 11:51:59.211393 2527262 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-999051" in "kube-system" namespace has status "Ready":"True"
	I1002 11:51:59.211422 2527262 pod_ready.go:81] duration metric: took 5.099993ms waiting for pod "kube-apiserver-ingress-addon-legacy-999051" in "kube-system" namespace to be "Ready" ...
	I1002 11:51:59.211434 2527262 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-999051" in "kube-system" namespace to be "Ready" ...
	I1002 11:51:59.216424 2527262 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-999051" in "kube-system" namespace has status "Ready":"True"
	I1002 11:51:59.216458 2527262 pod_ready.go:81] duration metric: took 5.015152ms waiting for pod "kube-controller-manager-ingress-addon-legacy-999051" in "kube-system" namespace to be "Ready" ...
	I1002 11:51:59.216471 2527262 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7fqqr" in "kube-system" namespace to be "Ready" ...
	I1002 11:51:59.221638 2527262 pod_ready.go:92] pod "kube-proxy-7fqqr" in "kube-system" namespace has status "Ready":"True"
	I1002 11:51:59.221665 2527262 pod_ready.go:81] duration metric: took 5.186918ms waiting for pod "kube-proxy-7fqqr" in "kube-system" namespace to be "Ready" ...
	I1002 11:51:59.221677 2527262 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-999051" in "kube-system" namespace to be "Ready" ...
	I1002 11:51:59.396108 2527262 request.go:629] Waited for 174.319849ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-999051
	I1002 11:51:59.596159 2527262 request.go:629] Waited for 197.336384ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ingress-addon-legacy-999051
	I1002 11:51:59.599016 2527262 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-999051" in "kube-system" namespace has status "Ready":"True"
	I1002 11:51:59.599044 2527262 pod_ready.go:81] duration metric: took 377.357827ms waiting for pod "kube-scheduler-ingress-addon-legacy-999051" in "kube-system" namespace to be "Ready" ...
	I1002 11:51:59.599058 2527262 pod_ready.go:38] duration metric: took 6.917028289s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 11:51:59.599075 2527262 api_server.go:52] waiting for apiserver process to appear ...
	I1002 11:51:59.599142 2527262 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:51:59.612813 2527262 api_server.go:72] duration metric: took 15.073267355s to wait for apiserver process to appear ...
	I1002 11:51:59.612838 2527262 api_server.go:88] waiting for apiserver healthz status ...
	I1002 11:51:59.612855 2527262 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1002 11:51:59.621995 2527262 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1002 11:51:59.623015 2527262 api_server.go:141] control plane version: v1.18.20
	I1002 11:51:59.623043 2527262 api_server.go:131] duration metric: took 10.19741ms to wait for apiserver health ...
	I1002 11:51:59.623053 2527262 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 11:51:59.795451 2527262 request.go:629] Waited for 172.330267ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I1002 11:51:59.801651 2527262 system_pods.go:59] 8 kube-system pods found
	I1002 11:51:59.801692 2527262 system_pods.go:61] "coredns-66bff467f8-jfdxw" [631d3c19-8a7f-47f0-a4c6-c285b023c842] Running
	I1002 11:51:59.801699 2527262 system_pods.go:61] "etcd-ingress-addon-legacy-999051" [603009bf-839a-48a9-9a57-342dfbf03276] Running
	I1002 11:51:59.801705 2527262 system_pods.go:61] "kindnet-wvpwx" [bc3bda54-dcbb-4af4-b1c5-e73e74bef785] Running
	I1002 11:51:59.801710 2527262 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-999051" [70bb51c6-2c9b-44bc-afbc-45be7d1db251] Running
	I1002 11:51:59.801715 2527262 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-999051" [831f5413-2ad7-44ef-aa63-247425563e33] Running
	I1002 11:51:59.801720 2527262 system_pods.go:61] "kube-proxy-7fqqr" [1128b1f9-b6c2-42be-8930-1d31047fb29e] Running
	I1002 11:51:59.801725 2527262 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-999051" [94986871-bf27-4bd9-922a-87e3b3b3da8b] Running
	I1002 11:51:59.801730 2527262 system_pods.go:61] "storage-provisioner" [2bb7903b-c549-4217-9eb7-d9899eaab45f] Running
	I1002 11:51:59.801746 2527262 system_pods.go:74] duration metric: took 178.683054ms to wait for pod list to return data ...
	I1002 11:51:59.801760 2527262 default_sa.go:34] waiting for default service account to be created ...
	I1002 11:51:59.996162 2527262 request.go:629] Waited for 194.330325ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I1002 11:51:59.998699 2527262 default_sa.go:45] found service account: "default"
	I1002 11:51:59.998733 2527262 default_sa.go:55] duration metric: took 196.96602ms for default service account to be created ...
	I1002 11:51:59.998744 2527262 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 11:52:00.195147 2527262 request.go:629] Waited for 196.320915ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I1002 11:52:00.205260 2527262 system_pods.go:86] 8 kube-system pods found
	I1002 11:52:00.206766 2527262 system_pods.go:89] "coredns-66bff467f8-jfdxw" [631d3c19-8a7f-47f0-a4c6-c285b023c842] Running
	I1002 11:52:00.206846 2527262 system_pods.go:89] "etcd-ingress-addon-legacy-999051" [603009bf-839a-48a9-9a57-342dfbf03276] Running
	I1002 11:52:00.212645 2527262 system_pods.go:89] "kindnet-wvpwx" [bc3bda54-dcbb-4af4-b1c5-e73e74bef785] Running
	I1002 11:52:00.212679 2527262 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-999051" [70bb51c6-2c9b-44bc-afbc-45be7d1db251] Running
	I1002 11:52:00.212689 2527262 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-999051" [831f5413-2ad7-44ef-aa63-247425563e33] Running
	I1002 11:52:00.212696 2527262 system_pods.go:89] "kube-proxy-7fqqr" [1128b1f9-b6c2-42be-8930-1d31047fb29e] Running
	I1002 11:52:00.212727 2527262 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-999051" [94986871-bf27-4bd9-922a-87e3b3b3da8b] Running
	I1002 11:52:00.212776 2527262 system_pods.go:89] "storage-provisioner" [2bb7903b-c549-4217-9eb7-d9899eaab45f] Running
	I1002 11:52:00.212787 2527262 system_pods.go:126] duration metric: took 214.035471ms to wait for k8s-apps to be running ...
	I1002 11:52:00.212796 2527262 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 11:52:00.212891 2527262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 11:52:00.237014 2527262 system_svc.go:56] duration metric: took 24.20344ms WaitForService to wait for kubelet.
	I1002 11:52:00.237056 2527262 kubeadm.go:581] duration metric: took 15.697523983s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1002 11:52:00.237081 2527262 node_conditions.go:102] verifying NodePressure condition ...
	I1002 11:52:00.395507 2527262 request.go:629] Waited for 158.342412ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I1002 11:52:00.399028 2527262 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1002 11:52:00.399061 2527262 node_conditions.go:123] node cpu capacity is 2
	I1002 11:52:00.399074 2527262 node_conditions.go:105] duration metric: took 161.98756ms to run NodePressure ...
	I1002 11:52:00.399105 2527262 start.go:228] waiting for startup goroutines ...
	I1002 11:52:00.399121 2527262 start.go:233] waiting for cluster config update ...
	I1002 11:52:00.399132 2527262 start.go:242] writing updated cluster config ...
	I1002 11:52:00.399468 2527262 ssh_runner.go:195] Run: rm -f paused
	I1002 11:52:00.475057 2527262 start.go:600] kubectl: 1.28.2, cluster: 1.18.20 (minor skew: 10)
	I1002 11:52:00.477789 2527262 out.go:177] 
	W1002 11:52:00.480223 2527262 out.go:239] ! /usr/local/bin/kubectl is version 1.28.2, which may have incompatibilities with Kubernetes 1.18.20.
	I1002 11:52:00.482552 2527262 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I1002 11:52:00.484732 2527262 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-999051" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* Oct 02 11:55:04 ingress-addon-legacy-999051 conmon[3613]: conmon dd01d77c5475c20a4757 <ninfo>: container 3624 exited with status 1
	Oct 02 11:55:04 ingress-addon-legacy-999051 crio[895]: time="2023-10-02 11:55:04.634868950Z" level=info msg="Started container" PID=3624 containerID=dd01d77c5475c20a4757af94ab860efdc736b8aaf3644270d5ecf097f9e4c088 description=default/hello-world-app-5f5d8b66bb-h8mqx/hello-world-app id=da5b2d0c-d48e-43be-9b0c-d827f71f30b5 name=/runtime.v1alpha2.RuntimeService/StartContainer sandboxID=09ab0bbf33786bc61e0e7641cf3ed23ecd8a478ed1b2edb44f2e1537789f7004
	Oct 02 11:55:04 ingress-addon-legacy-999051 crio[895]: time="2023-10-02 11:55:04.958268261Z" level=info msg="Removing container: a4cda6e80b5fc7f70f7a55051a862f9492bc27c74b41da3419e09dbc7b8ae744" id=f619b2a1-b64f-403a-aabd-b730de8b596c name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	Oct 02 11:55:04 ingress-addon-legacy-999051 crio[895]: time="2023-10-02 11:55:04.986714011Z" level=info msg="Removed container a4cda6e80b5fc7f70f7a55051a862f9492bc27c74b41da3419e09dbc7b8ae744: default/hello-world-app-5f5d8b66bb-h8mqx/hello-world-app" id=f619b2a1-b64f-403a-aabd-b730de8b596c name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	Oct 02 11:55:04 ingress-addon-legacy-999051 crio[895]: time="2023-10-02 11:55:04.995761111Z" level=info msg="Stopping pod sandbox: cd02c602e1e3370c9d7d11e5d18dcae47b9a3e37671a7de3530e9de70f7f1fb0" id=2fadcb0f-8967-4b60-83f7-65564172702a name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Oct 02 11:55:04 ingress-addon-legacy-999051 crio[895]: time="2023-10-02 11:55:04.995815002Z" level=info msg="Stopped pod sandbox (already stopped): cd02c602e1e3370c9d7d11e5d18dcae47b9a3e37671a7de3530e9de70f7f1fb0" id=2fadcb0f-8967-4b60-83f7-65564172702a name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Oct 02 11:55:05 ingress-addon-legacy-999051 crio[895]: time="2023-10-02 11:55:05.925556978Z" level=info msg="Stopping container: be1d5cf59bf80b0e4d6a0cd6a5b0d88fb067b216a16ef57234f972811bff8fcf (timeout: 2s)" id=089968a6-c537-4f18-ac39-90d5eecf66c1 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Oct 02 11:55:05 ingress-addon-legacy-999051 crio[895]: time="2023-10-02 11:55:05.934201435Z" level=info msg="Stopping container: be1d5cf59bf80b0e4d6a0cd6a5b0d88fb067b216a16ef57234f972811bff8fcf (timeout: 2s)" id=568e0e37-737e-4864-9b82-235f772187ee name=/runtime.v1alpha2.RuntimeService/StopContainer
	Oct 02 11:55:06 ingress-addon-legacy-999051 crio[895]: time="2023-10-02 11:55:06.511399937Z" level=info msg="Stopping pod sandbox: cd02c602e1e3370c9d7d11e5d18dcae47b9a3e37671a7de3530e9de70f7f1fb0" id=10c3d075-bb8d-408c-bc16-6673fc3eeee5 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Oct 02 11:55:06 ingress-addon-legacy-999051 crio[895]: time="2023-10-02 11:55:06.511448905Z" level=info msg="Stopped pod sandbox (already stopped): cd02c602e1e3370c9d7d11e5d18dcae47b9a3e37671a7de3530e9de70f7f1fb0" id=10c3d075-bb8d-408c-bc16-6673fc3eeee5 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Oct 02 11:55:07 ingress-addon-legacy-999051 crio[895]: time="2023-10-02 11:55:07.945739759Z" level=warning msg="Stopping container be1d5cf59bf80b0e4d6a0cd6a5b0d88fb067b216a16ef57234f972811bff8fcf with stop signal timed out: timeout reached after 2 seconds waiting for container process to exit" id=089968a6-c537-4f18-ac39-90d5eecf66c1 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Oct 02 11:55:08 ingress-addon-legacy-999051 conmon[2711]: conmon be1d5cf59bf80b0e4d6a <ninfo>: container 2722 exited with status 137
	Oct 02 11:55:08 ingress-addon-legacy-999051 crio[895]: time="2023-10-02 11:55:08.134805201Z" level=info msg="Stopped container be1d5cf59bf80b0e4d6a0cd6a5b0d88fb067b216a16ef57234f972811bff8fcf: ingress-nginx/ingress-nginx-controller-7fcf777cb7-cn5zx/controller" id=568e0e37-737e-4864-9b82-235f772187ee name=/runtime.v1alpha2.RuntimeService/StopContainer
	Oct 02 11:55:08 ingress-addon-legacy-999051 crio[895]: time="2023-10-02 11:55:08.136870860Z" level=info msg="Stopped container be1d5cf59bf80b0e4d6a0cd6a5b0d88fb067b216a16ef57234f972811bff8fcf: ingress-nginx/ingress-nginx-controller-7fcf777cb7-cn5zx/controller" id=089968a6-c537-4f18-ac39-90d5eecf66c1 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Oct 02 11:55:08 ingress-addon-legacy-999051 crio[895]: time="2023-10-02 11:55:08.137554348Z" level=info msg="Stopping pod sandbox: 553ca2d82f443f7371324431326c6025cb110e5e0bcaae5d250bdbd61a69b210" id=915fd985-a869-45ef-97d3-89800d9fdcb2 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Oct 02 11:55:08 ingress-addon-legacy-999051 crio[895]: time="2023-10-02 11:55:08.141337893Z" level=info msg="Stopping pod sandbox: 553ca2d82f443f7371324431326c6025cb110e5e0bcaae5d250bdbd61a69b210" id=05b9348f-3c53-4200-96e5-530fa7dca213 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Oct 02 11:55:08 ingress-addon-legacy-999051 crio[895]: time="2023-10-02 11:55:08.141536671Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HP-LT563XFU5ZTLYQJH - [0:0]\n:KUBE-HP-KAZJIRX6PQCYGNID - [0:0]\n:KUBE-HOSTPORTS - [0:0]\n-X KUBE-HP-KAZJIRX6PQCYGNID\n-X KUBE-HP-LT563XFU5ZTLYQJH\nCOMMIT\n"
	Oct 02 11:55:08 ingress-addon-legacy-999051 crio[895]: time="2023-10-02 11:55:08.146058005Z" level=info msg="Closing host port tcp:80"
	Oct 02 11:55:08 ingress-addon-legacy-999051 crio[895]: time="2023-10-02 11:55:08.146113176Z" level=info msg="Closing host port tcp:443"
	Oct 02 11:55:08 ingress-addon-legacy-999051 crio[895]: time="2023-10-02 11:55:08.147436386Z" level=info msg="Host port tcp:80 does not have an open socket"
	Oct 02 11:55:08 ingress-addon-legacy-999051 crio[895]: time="2023-10-02 11:55:08.147470642Z" level=info msg="Host port tcp:443 does not have an open socket"
	Oct 02 11:55:08 ingress-addon-legacy-999051 crio[895]: time="2023-10-02 11:55:08.147633670Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-7fcf777cb7-cn5zx Namespace:ingress-nginx ID:553ca2d82f443f7371324431326c6025cb110e5e0bcaae5d250bdbd61a69b210 UID:7b88b671-d0bc-4e99-b501-d96952ba2aa4 NetNS:/var/run/netns/225ebd27-fcb2-4d24-87e4-db82240a4da3 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Oct 02 11:55:08 ingress-addon-legacy-999051 crio[895]: time="2023-10-02 11:55:08.147781190Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-7fcf777cb7-cn5zx from CNI network \"kindnet\" (type=ptp)"
	Oct 02 11:55:08 ingress-addon-legacy-999051 crio[895]: time="2023-10-02 11:55:08.174344149Z" level=info msg="Stopped pod sandbox: 553ca2d82f443f7371324431326c6025cb110e5e0bcaae5d250bdbd61a69b210" id=915fd985-a869-45ef-97d3-89800d9fdcb2 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Oct 02 11:55:08 ingress-addon-legacy-999051 crio[895]: time="2023-10-02 11:55:08.174471633Z" level=info msg="Stopped pod sandbox (already stopped): 553ca2d82f443f7371324431326c6025cb110e5e0bcaae5d250bdbd61a69b210" id=05b9348f-3c53-4200-96e5-530fa7dca213 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	dd01d77c5475c       97e050c3e21e9472ce8eb8fcb7bb8f23063c0b473fe44bdc42246bb01c15cdd4                                                   9 seconds ago       Exited              hello-world-app           2                   09ab0bbf33786       hello-world-app-5f5d8b66bb-h8mqx
	b338c479a90aa       docker.io/library/nginx@sha256:4c93a3bd8bf95412889dd84213570102176b6052d88bb828eaf449c56aca55ef                    2 minutes ago       Running             nginx                     0                   89b4949b83dfb       nginx
	be1d5cf59bf80       registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324   3 minutes ago       Exited              controller                0                   553ca2d82f443       ingress-nginx-controller-7fcf777cb7-cn5zx
	cbc1f76262633       docker.io/jettech/kube-webhook-certgen@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7     3 minutes ago       Exited              patch                     0                   625e0774732e6       ingress-nginx-admission-patch-zfk7x
	5f87c03c3ed20       docker.io/jettech/kube-webhook-certgen@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7     3 minutes ago       Exited              create                    0                   6be2683dcd7df       ingress-nginx-admission-create-k86pw
	c3e19849f1090       gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2    3 minutes ago       Running             storage-provisioner       0                   a8fd9084b7e64       storage-provisioner
	a0e8dc36b40e1       6e17ba78cf3ebe1410fe828dc4ca57d3df37ad0b3c1a64161e5c27d57a24d184                                                   3 minutes ago       Running             coredns                   0                   e54ccef9e2db4       coredns-66bff467f8-jfdxw
	c41cdb4039128       docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052                 3 minutes ago       Running             kindnet-cni               0                   6e614c7b3216a       kindnet-wvpwx
	bfb5bff2371e6       565297bc6f7d41fdb7a8ac7f9d75617ef4e6efdd1b1e41af6e060e19c44c28a8                                                   3 minutes ago       Running             kube-proxy                0                   c8cb6b5b6d85e       kube-proxy-7fqqr
	e869194912169       2694cf044d66591c37b12c60ce1f1cdba3d271af5ebda43a2e4d32ebbadd97d0                                                   3 minutes ago       Running             kube-apiserver            0                   d78e51cf70041       kube-apiserver-ingress-addon-legacy-999051
	6133a4e76d0de       ab707b0a0ea339254cc6e3f2e7d618d4793d5129acb2288e9194769271404952                                                   3 minutes ago       Running             etcd                      0                   03fbbb50ac04b       etcd-ingress-addon-legacy-999051
	a4009d91a8041       095f37015706de6eedb4f57eb2f9a25a1e3bf4bec63d50ba73f8968ef4094fd1                                                   3 minutes ago       Running             kube-scheduler            0                   536ce0cca6a81       kube-scheduler-ingress-addon-legacy-999051
	0342a6cdd7730       68a4fac29a865f21217550dbd3570dc1adbc602cf05d6eeb6f060eec1359e1f1                                                   3 minutes ago       Running             kube-controller-manager   0                   2c7f78fcf9cb7       kube-controller-manager-ingress-addon-legacy-999051
	
	* 
	* ==> coredns [a0e8dc36b40e1ecb1ab33dbd8999476b4bee72b1b07298e276e59befe7614ff0] <==
	* [INFO] 10.244.0.5:36233 - 14685 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000034281s
	[INFO] 10.244.0.5:36233 - 47503 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001401265s
	[INFO] 10.244.0.5:50360 - 29211 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002792856s
	[INFO] 10.244.0.5:36233 - 56066 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001743001s
	[INFO] 10.244.0.5:50360 - 28516 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001805426s
	[INFO] 10.244.0.5:36233 - 37157 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000124817s
	[INFO] 10.244.0.5:50360 - 24651 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000182499s
	[INFO] 10.244.0.5:53070 - 52041 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000088599s
	[INFO] 10.244.0.5:39895 - 61507 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.00004183s
	[INFO] 10.244.0.5:39895 - 65367 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000042125s
	[INFO] 10.244.0.5:39895 - 10637 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000039286s
	[INFO] 10.244.0.5:39895 - 51860 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000037579s
	[INFO] 10.244.0.5:39895 - 11339 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000034552s
	[INFO] 10.244.0.5:39895 - 52542 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000036824s
	[INFO] 10.244.0.5:53070 - 516 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000047048s
	[INFO] 10.244.0.5:53070 - 14666 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000037399s
	[INFO] 10.244.0.5:53070 - 18926 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000057903s
	[INFO] 10.244.0.5:39895 - 23360 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001286778s
	[INFO] 10.244.0.5:53070 - 20980 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000058232s
	[INFO] 10.244.0.5:53070 - 27625 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000038532s
	[INFO] 10.244.0.5:39895 - 49373 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001122816s
	[INFO] 10.244.0.5:39895 - 31391 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000051897s
	[INFO] 10.244.0.5:53070 - 44489 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.00105203s
	[INFO] 10.244.0.5:53070 - 42311 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.000878934s
	[INFO] 10.244.0.5:53070 - 47750 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000052702s
	
	* 
	* ==> describe nodes <==
	* Name:               ingress-addon-legacy-999051
	Roles:              master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ingress-addon-legacy-999051
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=45957ed538272972541ab48cdf2c4b323d7f5c18
	                    minikube.k8s.io/name=ingress-addon-legacy-999051
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_02T11_51_29_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 02 Oct 2023 11:51:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-999051
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 02 Oct 2023 11:55:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 02 Oct 2023 11:55:02 +0000   Mon, 02 Oct 2023 11:51:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 02 Oct 2023 11:55:02 +0000   Mon, 02 Oct 2023 11:51:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 02 Oct 2023 11:55:02 +0000   Mon, 02 Oct 2023 11:51:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 02 Oct 2023 11:55:02 +0000   Mon, 02 Oct 2023 11:51:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ingress-addon-legacy-999051
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	System Info:
	  Machine ID:                 1067f6c3340a462ba917c40d24207fa5
	  System UUID:                97c0a91b-fb68-4919-9616-2745ea2e2e1a
	  Boot ID:                    67922263-14c1-496d-a009-5b9469adca8d
	  Kernel Version:             5.15.0-1045-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-h8mqx                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m48s
	  kube-system                 coredns-66bff467f8-jfdxw                               100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     3m30s
	  kube-system                 etcd-ingress-addon-legacy-999051                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m42s
	  kube-system                 kindnet-wvpwx                                          100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      3m30s
	  kube-system                 kube-apiserver-ingress-addon-legacy-999051             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m42s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-999051    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m42s
	  kube-system                 kube-proxy-7fqqr                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m30s
	  kube-system                 kube-scheduler-ingress-addon-legacy-999051             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m42s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             120Mi (1%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  NodeHasSufficientMemory  3m57s (x5 over 3m58s)  kubelet     Node ingress-addon-legacy-999051 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m57s (x5 over 3m58s)  kubelet     Node ingress-addon-legacy-999051 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m57s (x4 over 3m58s)  kubelet     Node ingress-addon-legacy-999051 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m42s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m42s                  kubelet     Node ingress-addon-legacy-999051 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m42s                  kubelet     Node ingress-addon-legacy-999051 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m42s                  kubelet     Node ingress-addon-legacy-999051 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m29s                  kube-proxy  Starting kube-proxy.
	  Normal  NodeReady                3m22s                  kubelet     Node ingress-addon-legacy-999051 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.001115] FS-Cache: O-key=[8] 'b3495c0100000000'
	[  +0.000697] FS-Cache: N-cookie c=000000c0 [p=000000b7 fl=2 nc=0 na=1]
	[  +0.000936] FS-Cache: N-cookie d=00000000b3d4d329{9p.inode} n=00000000c4ccfeb8
	[  +0.001126] FS-Cache: N-key=[8] 'b3495c0100000000'
	[  +0.003366] FS-Cache: Duplicate cookie detected
	[  +0.000731] FS-Cache: O-cookie c=000000b9 [p=000000b7 fl=226 nc=0 na=1]
	[  +0.001023] FS-Cache: O-cookie d=00000000b3d4d329{9p.inode} n=00000000389fe983
	[  +0.001046] FS-Cache: O-key=[8] 'b3495c0100000000'
	[  +0.000724] FS-Cache: N-cookie c=000000c1 [p=000000b7 fl=2 nc=0 na=1]
	[  +0.001003] FS-Cache: N-cookie d=00000000b3d4d329{9p.inode} n=000000006a94d035
	[  +0.001082] FS-Cache: N-key=[8] 'b3495c0100000000'
	[  +2.104034] FS-Cache: Duplicate cookie detected
	[  +0.000750] FS-Cache: O-cookie c=000000b8 [p=000000b7 fl=226 nc=0 na=1]
	[  +0.000953] FS-Cache: O-cookie d=00000000b3d4d329{9p.inode} n=000000000fb754f7
	[  +0.001029] FS-Cache: O-key=[8] 'b2495c0100000000'
	[  +0.000775] FS-Cache: N-cookie c=000000c3 [p=000000b7 fl=2 nc=0 na=1]
	[  +0.000924] FS-Cache: N-cookie d=00000000b3d4d329{9p.inode} n=00000000c4ccfeb8
	[  +0.001095] FS-Cache: N-key=[8] 'b2495c0100000000'
	[  +0.359690] FS-Cache: Duplicate cookie detected
	[  +0.000696] FS-Cache: O-cookie c=000000bd [p=000000b7 fl=226 nc=0 na=1]
	[  +0.000971] FS-Cache: O-cookie d=00000000b3d4d329{9p.inode} n=00000000bde6dc72
	[  +0.001094] FS-Cache: O-key=[8] 'b8495c0100000000'
	[  +0.000774] FS-Cache: N-cookie c=000000c4 [p=000000b7 fl=2 nc=0 na=1]
	[  +0.000930] FS-Cache: N-cookie d=00000000b3d4d329{9p.inode} n=00000000e33aca47
	[  +0.001035] FS-Cache: N-key=[8] 'b8495c0100000000'
	
	* 
	* ==> etcd [6133a4e76d0ded47b95fa2edbf70f91aaf66c1614aef1aae28511462224aa901] <==
	* raft2023/10/02 11:51:20 INFO: aec36adc501070cc became follower at term 0
	raft2023/10/02 11:51:20 INFO: newRaft aec36adc501070cc [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	raft2023/10/02 11:51:20 INFO: aec36adc501070cc became follower at term 1
	raft2023/10/02 11:51:20 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-10-02 11:51:20.420991 W | auth: simple token is not cryptographically signed
	2023-10-02 11:51:20.476956 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2023-10-02 11:51:20.701535 I | etcdserver: aec36adc501070cc as single-node; fast-forwarding 9 ticks (election ticks 10)
	raft2023/10/02 11:51:20 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-10-02 11:51:20.702397 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be
	2023-10-02 11:51:20.702985 I | embed: listening for peers on 192.168.49.2:2380
	2023-10-02 11:51:20.703048 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-10-02 11:51:20.703321 I | embed: listening for metrics on http://127.0.0.1:2381
	raft2023/10/02 11:51:21 INFO: aec36adc501070cc is starting a new election at term 1
	raft2023/10/02 11:51:21 INFO: aec36adc501070cc became candidate at term 2
	raft2023/10/02 11:51:21 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2
	raft2023/10/02 11:51:21 INFO: aec36adc501070cc became leader at term 2
	raft2023/10/02 11:51:21 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2
	2023-10-02 11:51:21.668971 I | etcdserver: published {Name:ingress-addon-legacy-999051 ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
	2023-10-02 11:51:21.669236 I | embed: ready to serve client requests
	2023-10-02 11:51:21.680558 I | embed: ready to serve client requests
	2023-10-02 11:51:21.696541 I | etcdserver: setting up the initial cluster version to 3.4
	2023-10-02 11:51:21.740575 I | embed: serving client requests on 127.0.0.1:2379
	2023-10-02 11:51:21.764570 N | etcdserver/membership: set the initial cluster version to 3.4
	2023-10-02 11:51:21.773526 I | embed: serving client requests on 192.168.49.2:2379
	2023-10-02 11:51:21.848692 I | etcdserver/api: enabled capabilities for version 3.4
	
	* 
	* ==> kernel <==
	*  11:55:14 up 19:37,  0 users,  load average: 0.27, 0.99, 1.67
	Linux ingress-addon-legacy-999051 5.15.0-1045-aws #50~20.04.1-Ubuntu SMP Wed Sep 6 17:32:55 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [c41cdb40391284e0915d04abdc9fd45717ac0c81c180aa9027770fd5b83e363d] <==
	* I1002 11:53:07.947507       1 main.go:227] handling current node
	I1002 11:53:17.956843       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1002 11:53:17.956874       1 main.go:227] handling current node
	I1002 11:53:27.965817       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1002 11:53:27.965848       1 main.go:227] handling current node
	I1002 11:53:37.974358       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1002 11:53:37.974389       1 main.go:227] handling current node
	I1002 11:53:47.978203       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1002 11:53:47.978240       1 main.go:227] handling current node
	I1002 11:53:57.987224       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1002 11:53:57.987256       1 main.go:227] handling current node
	I1002 11:54:07.997732       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1002 11:54:07.997762       1 main.go:227] handling current node
	I1002 11:54:18.010011       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1002 11:54:18.010160       1 main.go:227] handling current node
	I1002 11:54:28.020681       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1002 11:54:28.020711       1 main.go:227] handling current node
	I1002 11:54:38.031995       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1002 11:54:38.032026       1 main.go:227] handling current node
	I1002 11:54:48.037466       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1002 11:54:48.037500       1 main.go:227] handling current node
	I1002 11:54:58.041261       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1002 11:54:58.041290       1 main.go:227] handling current node
	I1002 11:55:08.052003       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1002 11:55:08.052033       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [e86919491216930b1fc391fd41f7c8e3dc753ce6d6bf346b776cc51910efb8f6] <==
	* I1002 11:51:25.910506       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I1002 11:51:25.910726       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1002 11:51:25.910763       1 cache.go:39] Caches are synced for autoregister controller
	I1002 11:51:25.911086       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I1002 11:51:25.911116       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1002 11:51:26.692934       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I1002 11:51:26.693081       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1002 11:51:26.699142       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I1002 11:51:26.705328       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I1002 11:51:26.705354       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I1002 11:51:27.149003       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1002 11:51:27.191600       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W1002 11:51:27.253419       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1002 11:51:27.254468       1 controller.go:609] quota admission added evaluator for: endpoints
	I1002 11:51:27.258980       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1002 11:51:28.134241       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I1002 11:51:28.882878       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I1002 11:51:28.997904       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I1002 11:51:32.332275       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I1002 11:51:44.701968       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I1002 11:51:44.739301       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I1002 11:52:01.375103       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I1002 11:52:26.809245       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	E1002 11:55:04.997512       1 watch.go:251] unable to encode watch object *v1.WatchEvent: http2: stream closed (&streaming.encoder{writer:(*http2.responseWriter)(0x4006c3bb70), encoder:(*versioning.codec)(0x400cfbce60), buf:(*bytes.Buffer)(0x400f094f30)})
	E1002 11:55:05.946705       1 authentication.go:53] Unable to authenticate the request due to an error: [invalid bearer token, Token has been invalidated]
	
	* 
	* ==> kube-controller-manager [0342a6cdd7730d54a82fa13500e81f8b433c4d38c7c8ec254f79bb2b7cb50116] <==
	* W1002 11:51:44.757695       1 node_lifecycle_controller.go:1048] Missing timestamp for Node ingress-addon-legacy-999051. Assuming now as a timestamp.
	I1002 11:51:44.757828       1 node_lifecycle_controller.go:1199] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
	I1002 11:51:44.758026       1 taint_manager.go:187] Starting NoExecuteTaintManager
	I1002 11:51:44.758795       1 event.go:278] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ingress-addon-legacy-999051", UID:"71171af3-2ed4-4b10-b962-dd4ca9b66e30", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node ingress-addon-legacy-999051 event: Registered Node ingress-addon-legacy-999051 in Controller
	I1002 11:51:44.792254       1 shared_informer.go:230] Caches are synced for resource quota 
	I1002 11:51:44.818108       1 shared_informer.go:230] Caches are synced for resource quota 
	I1002 11:51:44.818241       1 shared_informer.go:230] Caches are synced for garbage collector 
	I1002 11:51:44.818287       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1002 11:51:44.988506       1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kindnet", UID:"a6642020-49c7-4b51-adf3-f81651c4301d", APIVersion:"apps/v1", ResourceVersion:"226", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kindnet-wvpwx
	I1002 11:51:44.989612       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"5995c1bb-2743-45e5-adf9-ae356d717478", APIVersion:"apps/v1", ResourceVersion:"329", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set coredns-66bff467f8 to 1
	I1002 11:51:44.990156       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"01b9788e-fb94-4e2f-81c5-a2fe12fd8e59", APIVersion:"apps/v1", ResourceVersion:"337", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-jfdxw
	I1002 11:51:45.104725       1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"2a33108d-afdc-4962-ae73-af8638119315", APIVersion:"apps/v1", ResourceVersion:"214", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-7fqqr
	I1002 11:51:45.159369       1 request.go:621] Throttling request took 1.013459349s, request: GET:https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1beta1?timeout=32s
	I1002 11:51:45.610711       1 shared_informer.go:223] Waiting for caches to sync for garbage collector
	I1002 11:51:45.610769       1 shared_informer.go:230] Caches are synced for garbage collector 
	I1002 11:51:54.758355       1 node_lifecycle_controller.go:1226] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I1002 11:52:01.364179       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"b92e5ede-8e3f-48f5-bb7f-21087df603ce", APIVersion:"apps/v1", ResourceVersion:"461", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I1002 11:52:01.388785       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"9a82e7e2-9cb6-49da-aa02-1f42ab1e4d8e", APIVersion:"apps/v1", ResourceVersion:"462", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-cn5zx
	I1002 11:52:01.403937       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"b5cf8aab-938e-4f6b-9a1c-b110db65885b", APIVersion:"batch/v1", ResourceVersion:"466", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-k86pw
	I1002 11:52:01.435028       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"4c4082e2-913b-464b-a7ed-1835c21e49df", APIVersion:"batch/v1", ResourceVersion:"474", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-zfk7x
	I1002 11:52:05.603716       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"4c4082e2-913b-464b-a7ed-1835c21e49df", APIVersion:"batch/v1", ResourceVersion:"484", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I1002 11:52:05.619374       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"b5cf8aab-938e-4f6b-9a1c-b110db65885b", APIVersion:"batch/v1", ResourceVersion:"475", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I1002 11:54:47.219432       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"be59957c-d5dc-4d9c-adcc-75fe194f40a8", APIVersion:"apps/v1", ResourceVersion:"702", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I1002 11:54:47.265412       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"b5ef2479-2ff2-44d9-b4b9-3ed287abd728", APIVersion:"apps/v1", ResourceVersion:"703", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-h8mqx
	E1002 11:55:10.714850       1 tokens_controller.go:261] error synchronizing serviceaccount ingress-nginx/default: secrets "default-token-d2tpw" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated
	
	* 
	* ==> kube-proxy [bfb5bff2371e67c31bf1bf2d7437e61c18e072a01291e1e9ffbbed01e8297052] <==
	* W1002 11:51:45.875298       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I1002 11:51:45.888289       1 node.go:136] Successfully retrieved node IP: 192.168.49.2
	I1002 11:51:45.888404       1 server_others.go:186] Using iptables Proxier.
	I1002 11:51:45.890243       1 server.go:583] Version: v1.18.20
	I1002 11:51:45.891137       1 config.go:315] Starting service config controller
	I1002 11:51:45.891178       1 shared_informer.go:223] Waiting for caches to sync for service config
	I1002 11:51:45.891836       1 config.go:133] Starting endpoints config controller
	I1002 11:51:45.891885       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I1002 11:51:45.991752       1 shared_informer.go:230] Caches are synced for service config 
	I1002 11:51:45.992103       1 shared_informer.go:230] Caches are synced for endpoints config 
	
	* 
	* ==> kube-scheduler [a4009d91a8041334edf5c86e72e56499215ab6f460b3f14f29eedc20808c5730] <==
	* W1002 11:51:25.809378       1 authentication.go:299] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1002 11:51:25.914902       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I1002 11:51:25.914997       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I1002 11:51:25.917135       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I1002 11:51:25.917500       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1002 11:51:25.920576       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1002 11:51:25.920701       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E1002 11:51:25.943337       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1002 11:51:25.943574       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1002 11:51:25.943732       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1002 11:51:25.943869       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1002 11:51:25.943998       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1002 11:51:25.944124       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1002 11:51:25.944403       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1002 11:51:25.944553       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1002 11:51:25.944653       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1002 11:51:25.956824       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1002 11:51:25.956918       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1002 11:51:25.957001       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1002 11:51:26.776100       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1002 11:51:26.871088       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1002 11:51:27.176957       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1002 11:51:29.820826       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	E1002 11:51:45.380753       1 factory.go:503] pod kube-system/coredns-66bff467f8-jfdxw is already present in the backoff queue
	E1002 11:51:45.504457       1 factory.go:503] pod: kube-system/storage-provisioner is already present in the active queue
	
	* 
	* ==> kubelet <==
	* Oct 02 11:54:50 ingress-addon-legacy-999051 kubelet[1630]: E1002 11:54:50.513853    1630 kuberuntime_image.go:85] ImageStatus for image {"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab"} failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Oct 02 11:54:50 ingress-addon-legacy-999051 kubelet[1630]: E1002 11:54:50.513910    1630 kuberuntime_manager.go:818] container start failed: ImageInspectError: Failed to inspect image "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab": rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Oct 02 11:54:50 ingress-addon-legacy-999051 kubelet[1630]: E1002 11:54:50.513947    1630 pod_workers.go:191] Error syncing pod cef15813-aba1-49dd-932f-d1800671f6b5 ("kube-ingress-dns-minikube_kube-system(cef15813-aba1-49dd-932f-d1800671f6b5)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	Oct 02 11:54:50 ingress-addon-legacy-999051 kubelet[1630]: I1002 11:54:50.931065    1630 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: c0032ff37869a8a674bd820408305bf6f47c8ac863cff2f6e013f74fa289024c
	Oct 02 11:54:51 ingress-addon-legacy-999051 kubelet[1630]: I1002 11:54:51.934200    1630 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: c0032ff37869a8a674bd820408305bf6f47c8ac863cff2f6e013f74fa289024c
	Oct 02 11:54:51 ingress-addon-legacy-999051 kubelet[1630]: I1002 11:54:51.934311    1630 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: a4cda6e80b5fc7f70f7a55051a862f9492bc27c74b41da3419e09dbc7b8ae744
	Oct 02 11:54:51 ingress-addon-legacy-999051 kubelet[1630]: E1002 11:54:51.934567    1630 pod_workers.go:191] Error syncing pod 97911a01-9aa0-456d-99ae-adebb229ca37 ("hello-world-app-5f5d8b66bb-h8mqx_default(97911a01-9aa0-456d-99ae-adebb229ca37)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 10s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-h8mqx_default(97911a01-9aa0-456d-99ae-adebb229ca37)"
	Oct 02 11:54:52 ingress-addon-legacy-999051 kubelet[1630]: I1002 11:54:52.937028    1630 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: a4cda6e80b5fc7f70f7a55051a862f9492bc27c74b41da3419e09dbc7b8ae744
	Oct 02 11:54:52 ingress-addon-legacy-999051 kubelet[1630]: E1002 11:54:52.937273    1630 pod_workers.go:191] Error syncing pod 97911a01-9aa0-456d-99ae-adebb229ca37 ("hello-world-app-5f5d8b66bb-h8mqx_default(97911a01-9aa0-456d-99ae-adebb229ca37)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 10s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-h8mqx_default(97911a01-9aa0-456d-99ae-adebb229ca37)"
	Oct 02 11:55:03 ingress-addon-legacy-999051 kubelet[1630]: I1002 11:55:03.379585    1630 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-cv8fw" (UniqueName: "kubernetes.io/secret/cef15813-aba1-49dd-932f-d1800671f6b5-minikube-ingress-dns-token-cv8fw") pod "cef15813-aba1-49dd-932f-d1800671f6b5" (UID: "cef15813-aba1-49dd-932f-d1800671f6b5")
	Oct 02 11:55:03 ingress-addon-legacy-999051 kubelet[1630]: I1002 11:55:03.384177    1630 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cef15813-aba1-49dd-932f-d1800671f6b5-minikube-ingress-dns-token-cv8fw" (OuterVolumeSpecName: "minikube-ingress-dns-token-cv8fw") pod "cef15813-aba1-49dd-932f-d1800671f6b5" (UID: "cef15813-aba1-49dd-932f-d1800671f6b5"). InnerVolumeSpecName "minikube-ingress-dns-token-cv8fw". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Oct 02 11:55:03 ingress-addon-legacy-999051 kubelet[1630]: I1002 11:55:03.479987    1630 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-cv8fw" (UniqueName: "kubernetes.io/secret/cef15813-aba1-49dd-932f-d1800671f6b5-minikube-ingress-dns-token-cv8fw") on node "ingress-addon-legacy-999051" DevicePath ""
	Oct 02 11:55:04 ingress-addon-legacy-999051 kubelet[1630]: I1002 11:55:04.511402    1630 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: a4cda6e80b5fc7f70f7a55051a862f9492bc27c74b41da3419e09dbc7b8ae744
	Oct 02 11:55:04 ingress-addon-legacy-999051 kubelet[1630]: I1002 11:55:04.955826    1630 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: a4cda6e80b5fc7f70f7a55051a862f9492bc27c74b41da3419e09dbc7b8ae744
	Oct 02 11:55:04 ingress-addon-legacy-999051 kubelet[1630]: I1002 11:55:04.956119    1630 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: dd01d77c5475c20a4757af94ab860efdc736b8aaf3644270d5ecf097f9e4c088
	Oct 02 11:55:04 ingress-addon-legacy-999051 kubelet[1630]: E1002 11:55:04.956441    1630 pod_workers.go:191] Error syncing pod 97911a01-9aa0-456d-99ae-adebb229ca37 ("hello-world-app-5f5d8b66bb-h8mqx_default(97911a01-9aa0-456d-99ae-adebb229ca37)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-h8mqx_default(97911a01-9aa0-456d-99ae-adebb229ca37)"
	Oct 02 11:55:05 ingress-addon-legacy-999051 kubelet[1630]: E1002 11:55:05.929213    1630 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-cn5zx.178a484d2aba213d", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-cn5zx", UID:"7b88b671-d0bc-4e99-b501-d96952ba2aa4", APIVersion:"v1", ResourceVersion:"468", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-999051"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc13ec9c67721673d, ext:217098135651, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc13ec9c67721673d, ext:217098135651, loc:(*time.Location)(0x6a0ef20)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-cn5zx.178a484d2aba213d" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Oct 02 11:55:05 ingress-addon-legacy-999051 kubelet[1630]: E1002 11:55:05.938702    1630 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-cn5zx.178a484d2aba213d", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-cn5zx", UID:"7b88b671-d0bc-4e99-b501-d96952ba2aa4", APIVersion:"v1", ResourceVersion:"468", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-999051"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc13ec9c67721673d, ext:217098135651, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc13ec9c677a5be58, ext:217106808703, loc:(*time.Location)(0x6a0ef20)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-cn5zx.178a484d2aba213d" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Oct 02 11:55:08 ingress-addon-legacy-999051 kubelet[1630]: W1002 11:55:08.967146    1630 pod_container_deletor.go:77] Container "553ca2d82f443f7371324431326c6025cb110e5e0bcaae5d250bdbd61a69b210" not found in pod's containers
	Oct 02 11:55:10 ingress-addon-legacy-999051 kubelet[1630]: I1002 11:55:10.096982    1630 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/7b88b671-d0bc-4e99-b501-d96952ba2aa4-webhook-cert") pod "7b88b671-d0bc-4e99-b501-d96952ba2aa4" (UID: "7b88b671-d0bc-4e99-b501-d96952ba2aa4")
	Oct 02 11:55:10 ingress-addon-legacy-999051 kubelet[1630]: I1002 11:55:10.097069    1630 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-sw7zc" (UniqueName: "kubernetes.io/secret/7b88b671-d0bc-4e99-b501-d96952ba2aa4-ingress-nginx-token-sw7zc") pod "7b88b671-d0bc-4e99-b501-d96952ba2aa4" (UID: "7b88b671-d0bc-4e99-b501-d96952ba2aa4")
	Oct 02 11:55:10 ingress-addon-legacy-999051 kubelet[1630]: I1002 11:55:10.105684    1630 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b88b671-d0bc-4e99-b501-d96952ba2aa4-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "7b88b671-d0bc-4e99-b501-d96952ba2aa4" (UID: "7b88b671-d0bc-4e99-b501-d96952ba2aa4"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Oct 02 11:55:10 ingress-addon-legacy-999051 kubelet[1630]: I1002 11:55:10.106397    1630 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b88b671-d0bc-4e99-b501-d96952ba2aa4-ingress-nginx-token-sw7zc" (OuterVolumeSpecName: "ingress-nginx-token-sw7zc") pod "7b88b671-d0bc-4e99-b501-d96952ba2aa4" (UID: "7b88b671-d0bc-4e99-b501-d96952ba2aa4"). InnerVolumeSpecName "ingress-nginx-token-sw7zc". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Oct 02 11:55:10 ingress-addon-legacy-999051 kubelet[1630]: I1002 11:55:10.197438    1630 reconciler.go:319] Volume detached for volume "ingress-nginx-token-sw7zc" (UniqueName: "kubernetes.io/secret/7b88b671-d0bc-4e99-b501-d96952ba2aa4-ingress-nginx-token-sw7zc") on node "ingress-addon-legacy-999051" DevicePath ""
	Oct 02 11:55:10 ingress-addon-legacy-999051 kubelet[1630]: I1002 11:55:10.197495    1630 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/7b88b671-d0bc-4e99-b501-d96952ba2aa4-webhook-cert") on node "ingress-addon-legacy-999051" DevicePath ""
	
	* 
	* ==> storage-provisioner [c3e19849f1090050ab7b6e762b73f610729b52cfb95ea6ebaf012780c675f74b] <==
	* I1002 11:51:58.246552       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1002 11:51:58.259992       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1002 11:51:58.260073       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1002 11:51:58.267829       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1002 11:51:58.268415       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"01da93ff-a91b-47d4-9c7b-d6ef458b1f7c", APIVersion:"v1", ResourceVersion:"419", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-999051_1ccaf9c8-08a7-401e-bcc6-0c35d75de392 became leader
	I1002 11:51:58.268553       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-999051_1ccaf9c8-08a7-401e-bcc6-0c35d75de392!
	I1002 11:51:58.369014       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-999051_1ccaf9c8-08a7-401e-bcc6-0c35d75de392!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ingress-addon-legacy-999051 -n ingress-addon-legacy-999051
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-999051 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (180.35s)
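Diagnostic note: the repeated ImageInspectError in the kubelet log above is a CRI-O short-name resolution failure — `cryptexlabs/minikube-ingress-dns:0.3.0@sha256:…` carries no registry prefix, and the node's `/etc/containers/registries.conf` defines no unqualified-search registries. A minimal sketch of the conventional remedy (illustrative only; not necessarily the fix applied to this addon) is to declare a search registry:

```toml
# /etc/containers/registries.conf (sketch): lets CRI-O expand short names
# like "cryptexlabs/minikube-ingress-dns" to docker.io/cryptexlabs/...
unqualified-search-registries = ["docker.io"]
```

Fully qualifying the image reference in the addon manifest (prefixing it with `docker.io/`) avoids the short-name lookup entirely and is the more robust option.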

TestMultiNode/serial/PingHostFrom2Pods (4.52s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-361100 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:560: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-361100 -- exec busybox-5bc68d56bd-4tnjh -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-361100 -- exec busybox-5bc68d56bd-4tnjh -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:571: (dbg) Non-zero exit: out/minikube-linux-arm64 kubectl -p multinode-361100 -- exec busybox-5bc68d56bd-4tnjh -- sh -c "ping -c 1 192.168.58.1": exit status 1 (255.716249ms)

-- stdout --
	PING 192.168.58.1 (192.168.58.1): 56 data bytes

-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

** /stderr **
multinode_test.go:572: Failed to ping host (192.168.58.1) from pod (busybox-5bc68d56bd-4tnjh): exit status 1
multinode_test.go:560: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-361100 -- exec busybox-5bc68d56bd-wmx6q -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-361100 -- exec busybox-5bc68d56bd-wmx6q -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:571: (dbg) Non-zero exit: out/minikube-linux-arm64 kubectl -p multinode-361100 -- exec busybox-5bc68d56bd-wmx6q -- sh -c "ping -c 1 192.168.58.1": exit status 1 (259.154334ms)

-- stdout --
	PING 192.168.58.1 (192.168.58.1): 56 data bytes

-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

** /stderr **
multinode_test.go:572: Failed to ping host (192.168.58.1) from pod (busybox-5bc68d56bd-wmx6q): exit status 1
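Diagnostic note: both pods fail with `ping: permission denied (are you root?)` because busybox's `ping` opens a raw ICMP socket, which requires either the CAP_NET_RAW capability or unprivileged ICMP sockets enabled via the kernel's `net.ipv4.ping_group_range` sysctl. A hedged sketch of one workaround (the pod name and image tag here are illustrative, not taken from the test manifest):

```yaml
# Sketch: grant the container CAP_NET_RAW so `ping` can open a raw ICMP socket.
apiVersion: v1
kind: Pod
metadata:
  name: busybox-ping
spec:
  containers:
  - name: busybox
    image: busybox
    command: ["sleep", "3600"]
    securityContext:
      capabilities:
        add: ["NET_RAW"]
```

Alternatively, widening the sysctl on the node (e.g. `sysctl -w net.ipv4.ping_group_range="0 2147483647"`) permits unprivileged ICMP echo sockets for all groups without granting the capability.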
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-361100
helpers_test.go:235: (dbg) docker inspect multinode-361100:

-- stdout --
	[
	    {
	        "Id": "506dd6922a980e458f6da9ba5667ad60afdf56bc377cf5d8b7da92e45a291166",
	        "Created": "2023-10-02T12:01:47.510108998Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2564002,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-10-02T12:01:47.836460543Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:560a33002deec07a703a16e2b1dbf6aecde4c0d46aaefa1cb6df4c8c8a7774a7",
	        "ResolvConfPath": "/var/lib/docker/containers/506dd6922a980e458f6da9ba5667ad60afdf56bc377cf5d8b7da92e45a291166/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/506dd6922a980e458f6da9ba5667ad60afdf56bc377cf5d8b7da92e45a291166/hostname",
	        "HostsPath": "/var/lib/docker/containers/506dd6922a980e458f6da9ba5667ad60afdf56bc377cf5d8b7da92e45a291166/hosts",
	        "LogPath": "/var/lib/docker/containers/506dd6922a980e458f6da9ba5667ad60afdf56bc377cf5d8b7da92e45a291166/506dd6922a980e458f6da9ba5667ad60afdf56bc377cf5d8b7da92e45a291166-json.log",
	        "Name": "/multinode-361100",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "multinode-361100:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "multinode-361100",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/0f7c181ecb853760d0ee71338ebb12d3605671c3cac256d49cf9bb5c0d6be4ed-init/diff:/var/lib/docker/overlay2/1ffc828a09df1e9fa25f5092ba7b162a0fa5a6fe031a41b1f614792625eb1522/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0f7c181ecb853760d0ee71338ebb12d3605671c3cac256d49cf9bb5c0d6be4ed/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0f7c181ecb853760d0ee71338ebb12d3605671c3cac256d49cf9bb5c0d6be4ed/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0f7c181ecb853760d0ee71338ebb12d3605671c3cac256d49cf9bb5c0d6be4ed/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "multinode-361100",
	                "Source": "/var/lib/docker/volumes/multinode-361100/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "multinode-361100",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "multinode-361100",
	                "name.minikube.sigs.k8s.io": "multinode-361100",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "57c4006afee1c293686e8503db751ba33be7e5d2c8c94bc9958d5b6f09f6200a",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35947"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35946"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35943"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35945"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35944"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/57c4006afee1",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "multinode-361100": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "506dd6922a98",
	                        "multinode-361100"
	                    ],
	                    "NetworkID": "1637dc8803f84bfa8a480c3703686e27d2169f165f13aa28ad455b5212dabab2",
	                    "EndpointID": "b6ff4f9034acf0cc9fe9b8c803b1466cc2f58657d006ff4e071a5fd77559374d",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p multinode-361100 -n multinode-361100
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p multinode-361100 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p multinode-361100 logs -n 25: (1.699698051s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -p mount-start-2-164003                           | mount-start-2-164003 | jenkins | v1.31.2 | 02 Oct 23 12:01 UTC | 02 Oct 23 12:01 UTC |
	|         | --memory=2048 --mount                             |                      |         |         |                     |                     |
	|         | --mount-gid 0 --mount-msize                       |                      |         |         |                     |                     |
	|         | 6543 --mount-port 46465                           |                      |         |         |                     |                     |
	|         | --mount-uid 0 --no-kubernetes                     |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| ssh     | mount-start-2-164003 ssh -- ls                    | mount-start-2-164003 | jenkins | v1.31.2 | 02 Oct 23 12:01 UTC | 02 Oct 23 12:01 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-1-162050                           | mount-start-1-162050 | jenkins | v1.31.2 | 02 Oct 23 12:01 UTC | 02 Oct 23 12:01 UTC |
	|         | --alsologtostderr -v=5                            |                      |         |         |                     |                     |
	| ssh     | mount-start-2-164003 ssh -- ls                    | mount-start-2-164003 | jenkins | v1.31.2 | 02 Oct 23 12:01 UTC | 02 Oct 23 12:01 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| stop    | -p mount-start-2-164003                           | mount-start-2-164003 | jenkins | v1.31.2 | 02 Oct 23 12:01 UTC | 02 Oct 23 12:01 UTC |
	| start   | -p mount-start-2-164003                           | mount-start-2-164003 | jenkins | v1.31.2 | 02 Oct 23 12:01 UTC | 02 Oct 23 12:01 UTC |
	| ssh     | mount-start-2-164003 ssh -- ls                    | mount-start-2-164003 | jenkins | v1.31.2 | 02 Oct 23 12:01 UTC | 02 Oct 23 12:01 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-2-164003                           | mount-start-2-164003 | jenkins | v1.31.2 | 02 Oct 23 12:01 UTC | 02 Oct 23 12:01 UTC |
	| delete  | -p mount-start-1-162050                           | mount-start-1-162050 | jenkins | v1.31.2 | 02 Oct 23 12:01 UTC | 02 Oct 23 12:01 UTC |
	| start   | -p multinode-361100                               | multinode-361100     | jenkins | v1.31.2 | 02 Oct 23 12:01 UTC | 02 Oct 23 12:03 UTC |
	|         | --wait=true --memory=2200                         |                      |         |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr                                 |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| kubectl | -p multinode-361100 -- apply -f                   | multinode-361100     | jenkins | v1.31.2 | 02 Oct 23 12:03 UTC | 02 Oct 23 12:03 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |         |         |                     |                     |
	| kubectl | -p multinode-361100 -- rollout                    | multinode-361100     | jenkins | v1.31.2 | 02 Oct 23 12:03 UTC | 02 Oct 23 12:03 UTC |
	|         | status deployment/busybox                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-361100 -- get pods -o                | multinode-361100     | jenkins | v1.31.2 | 02 Oct 23 12:03 UTC | 02 Oct 23 12:03 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-361100 -- get pods -o                | multinode-361100     | jenkins | v1.31.2 | 02 Oct 23 12:03 UTC | 02 Oct 23 12:03 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-361100 -- exec                       | multinode-361100     | jenkins | v1.31.2 | 02 Oct 23 12:03 UTC | 02 Oct 23 12:03 UTC |
	|         | busybox-5bc68d56bd-4tnjh --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-361100 -- exec                       | multinode-361100     | jenkins | v1.31.2 | 02 Oct 23 12:03 UTC | 02 Oct 23 12:03 UTC |
	|         | busybox-5bc68d56bd-wmx6q --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-361100 -- exec                       | multinode-361100     | jenkins | v1.31.2 | 02 Oct 23 12:03 UTC | 02 Oct 23 12:03 UTC |
	|         | busybox-5bc68d56bd-4tnjh --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-361100 -- exec                       | multinode-361100     | jenkins | v1.31.2 | 02 Oct 23 12:03 UTC | 02 Oct 23 12:03 UTC |
	|         | busybox-5bc68d56bd-wmx6q --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-361100 -- exec                       | multinode-361100     | jenkins | v1.31.2 | 02 Oct 23 12:03 UTC | 02 Oct 23 12:03 UTC |
	|         | busybox-5bc68d56bd-4tnjh -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-361100 -- exec                       | multinode-361100     | jenkins | v1.31.2 | 02 Oct 23 12:03 UTC | 02 Oct 23 12:03 UTC |
	|         | busybox-5bc68d56bd-wmx6q -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-361100 -- get pods -o                | multinode-361100     | jenkins | v1.31.2 | 02 Oct 23 12:03 UTC | 02 Oct 23 12:03 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-361100 -- exec                       | multinode-361100     | jenkins | v1.31.2 | 02 Oct 23 12:03 UTC | 02 Oct 23 12:03 UTC |
	|         | busybox-5bc68d56bd-4tnjh                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-361100 -- exec                       | multinode-361100     | jenkins | v1.31.2 | 02 Oct 23 12:03 UTC |                     |
	|         | busybox-5bc68d56bd-4tnjh -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.58.1                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-361100 -- exec                       | multinode-361100     | jenkins | v1.31.2 | 02 Oct 23 12:03 UTC | 02 Oct 23 12:03 UTC |
	|         | busybox-5bc68d56bd-wmx6q                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-361100 -- exec                       | multinode-361100     | jenkins | v1.31.2 | 02 Oct 23 12:03 UTC |                     |
	|         | busybox-5bc68d56bd-wmx6q -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.58.1                         |                      |         |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
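The last rows of the table above extract the host gateway IP by piping busybox `nslookup` output through `awk 'NR==5' | cut -d' ' -f3`. A minimal sketch of that parsing step, using canned output (no cluster assumed; the addresses below are illustrative, not from this run):

```shell
# Canned nslookup output in the shape busybox produces inside a pod; a real run
# would come from: kubectl exec <pod> -- sh -c "nslookup host.minikube.internal"
out="Server:    10.96.0.10
Address:   10.96.0.10:53

Name:      host.minikube.internal
Address 1: 192.168.58.1 host.minikube.internal"

# NR==5 keeps only the fifth line ("Address 1: <ip> <name>");
# cut -d' ' -f3 then takes the third space-separated field, the IP itself.
printf '%s\n' "$out" | awk 'NR==5' | cut -d' ' -f3
```

Against a live cluster the same pipe runs inside the pod, exactly as the table shows, and the resulting IP is what the follow-up `ping -c 1 192.168.58.1` row targets.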
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/02 12:01:42
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.21.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 12:01:42.021693 2563543 out.go:296] Setting OutFile to fd 1 ...
	I1002 12:01:42.021912 2563543 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 12:01:42.021934 2563543 out.go:309] Setting ErrFile to fd 2...
	I1002 12:01:42.021961 2563543 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 12:01:42.022320 2563543 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17340-2494243/.minikube/bin
	I1002 12:01:42.022922 2563543 out.go:303] Setting JSON to false
	I1002 12:01:42.024069 2563543 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":71048,"bootTime":1696177054,"procs":275,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1045-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1002 12:01:42.024206 2563543 start.go:138] virtualization:  
	I1002 12:01:42.027960 2563543 out.go:177] * [multinode-361100] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I1002 12:01:42.030145 2563543 notify.go:220] Checking for updates...
	I1002 12:01:42.031241 2563543 out.go:177]   - MINIKUBE_LOCATION=17340
	I1002 12:01:42.035178 2563543 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 12:01:42.037021 2563543 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17340-2494243/kubeconfig
	I1002 12:01:42.039049 2563543 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17340-2494243/.minikube
	I1002 12:01:42.040787 2563543 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 12:01:42.042883 2563543 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 12:01:42.045035 2563543 driver.go:373] Setting default libvirt URI to qemu:///system
	I1002 12:01:42.072043 2563543 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I1002 12:01:42.072215 2563543 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 12:01:42.165940 2563543 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:37 OomKillDisable:true NGoroutines:45 SystemTime:2023-10-02 12:01:42.154305171 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1045-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1002 12:01:42.166057 2563543 docker.go:294] overlay module found
	I1002 12:01:42.169342 2563543 out.go:177] * Using the docker driver based on user configuration
	I1002 12:01:42.171276 2563543 start.go:298] selected driver: docker
	I1002 12:01:42.171305 2563543 start.go:902] validating driver "docker" against <nil>
	I1002 12:01:42.171323 2563543 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 12:01:42.172220 2563543 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 12:01:42.251683 2563543 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:37 OomKillDisable:true NGoroutines:45 SystemTime:2023-10-02 12:01:42.240571877 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1045-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1002 12:01:42.251899 2563543 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I1002 12:01:42.252161 2563543 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 12:01:42.254146 2563543 out.go:177] * Using Docker driver with root privileges
	I1002 12:01:42.256010 2563543 cni.go:84] Creating CNI manager for ""
	I1002 12:01:42.256048 2563543 cni.go:136] 0 nodes found, recommending kindnet
	I1002 12:01:42.256109 2563543 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I1002 12:01:42.256125 2563543 start_flags.go:321] config:
	{Name:multinode-361100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:multinode-361100 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 12:01:42.258580 2563543 out.go:177] * Starting control plane node multinode-361100 in cluster multinode-361100
	I1002 12:01:42.260639 2563543 cache.go:122] Beginning downloading kic base image for docker with crio
	I1002 12:01:42.262578 2563543 out.go:177] * Pulling base image ...
	I1002 12:01:42.264495 2563543 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1002 12:01:42.264553 2563543 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 in local docker daemon
	I1002 12:01:42.264579 2563543 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17340-2494243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-arm64.tar.lz4
	I1002 12:01:42.264592 2563543 cache.go:57] Caching tarball of preloaded images
	I1002 12:01:42.264686 2563543 preload.go:174] Found /home/jenkins/minikube-integration/17340-2494243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1002 12:01:42.264697 2563543 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on crio
	I1002 12:01:42.265120 2563543 profile.go:148] Saving config to /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/multinode-361100/config.json ...
	I1002 12:01:42.265153 2563543 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/multinode-361100/config.json: {Name:mk5b43fe6282fc8bc4c60d24a4024d89c5e74164 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 12:01:42.284105 2563543 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 in local docker daemon, skipping pull
	I1002 12:01:42.284137 2563543 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 exists in daemon, skipping load
	I1002 12:01:42.284166 2563543 cache.go:195] Successfully downloaded all kic artifacts
	I1002 12:01:42.284279 2563543 start.go:365] acquiring machines lock for multinode-361100: {Name:mk134d9167acca24e5902775937d62265e090c25 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 12:01:42.284438 2563543 start.go:369] acquired machines lock for "multinode-361100" in 135.615µs
	I1002 12:01:42.284477 2563543 start.go:93] Provisioning new machine with config: &{Name:multinode-361100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:multinode-361100 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 12:01:42.284672 2563543 start.go:125] createHost starting for "" (driver="docker")
	I1002 12:01:42.287101 2563543 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1002 12:01:42.287387 2563543 start.go:159] libmachine.API.Create for "multinode-361100" (driver="docker")
	I1002 12:01:42.287415 2563543 client.go:168] LocalClient.Create starting
	I1002 12:01:42.287497 2563543 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17340-2494243/.minikube/certs/ca.pem
	I1002 12:01:42.287536 2563543 main.go:141] libmachine: Decoding PEM data...
	I1002 12:01:42.287553 2563543 main.go:141] libmachine: Parsing certificate...
	I1002 12:01:42.287612 2563543 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17340-2494243/.minikube/certs/cert.pem
	I1002 12:01:42.287634 2563543 main.go:141] libmachine: Decoding PEM data...
	I1002 12:01:42.287645 2563543 main.go:141] libmachine: Parsing certificate...
	I1002 12:01:42.288034 2563543 cli_runner.go:164] Run: docker network inspect multinode-361100 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1002 12:01:42.307030 2563543 cli_runner.go:211] docker network inspect multinode-361100 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1002 12:01:42.307146 2563543 network_create.go:281] running [docker network inspect multinode-361100] to gather additional debugging logs...
	I1002 12:01:42.307182 2563543 cli_runner.go:164] Run: docker network inspect multinode-361100
	W1002 12:01:42.326983 2563543 cli_runner.go:211] docker network inspect multinode-361100 returned with exit code 1
	I1002 12:01:42.327014 2563543 network_create.go:284] error running [docker network inspect multinode-361100]: docker network inspect multinode-361100: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-361100 not found
	I1002 12:01:42.327028 2563543 network_create.go:286] output of [docker network inspect multinode-361100]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-361100 not found
	
	** /stderr **
	I1002 12:01:42.327102 2563543 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 12:01:42.348005 2563543 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-ad66715ded82 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:d8:bc:22:f4} reservation:<nil>}
	I1002 12:01:42.348630 2563543 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4000d30ba0}
	I1002 12:01:42.348666 2563543 network_create.go:123] attempt to create docker network multinode-361100 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1002 12:01:42.348758 2563543 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-361100 multinode-361100
	I1002 12:01:42.443124 2563543 network_create.go:107] docker network multinode-361100 192.168.58.0/24 created
	I1002 12:01:42.443157 2563543 kic.go:117] calculated static IP "192.168.58.2" for the "multinode-361100" container
	I1002 12:01:42.443238 2563543 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1002 12:01:42.462211 2563543 cli_runner.go:164] Run: docker volume create multinode-361100 --label name.minikube.sigs.k8s.io=multinode-361100 --label created_by.minikube.sigs.k8s.io=true
	I1002 12:01:42.486120 2563543 oci.go:103] Successfully created a docker volume multinode-361100
	I1002 12:01:42.486205 2563543 cli_runner.go:164] Run: docker run --rm --name multinode-361100-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-361100 --entrypoint /usr/bin/test -v multinode-361100:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 -d /var/lib
	I1002 12:01:43.064112 2563543 oci.go:107] Successfully prepared a docker volume multinode-361100
	I1002 12:01:43.064150 2563543 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1002 12:01:43.064173 2563543 kic.go:190] Starting extracting preloaded images to volume ...
	I1002 12:01:43.064262 2563543 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17340-2494243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v multinode-361100:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 -I lz4 -xf /preloaded.tar -C /extractDir
	I1002 12:01:47.419178 2563543 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17340-2494243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v multinode-361100:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 -I lz4 -xf /preloaded.tar -C /extractDir: (4.354868266s)
	I1002 12:01:47.419214 2563543 kic.go:199] duration metric: took 4.355039 seconds to extract preloaded images to volume
	W1002 12:01:47.419375 2563543 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1002 12:01:47.419506 2563543 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1002 12:01:47.487892 2563543 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-361100 --name multinode-361100 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-361100 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-361100 --network multinode-361100 --ip 192.168.58.2 --volume multinode-361100:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3
	I1002 12:01:47.846190 2563543 cli_runner.go:164] Run: docker container inspect multinode-361100 --format={{.State.Running}}
	I1002 12:01:47.879332 2563543 cli_runner.go:164] Run: docker container inspect multinode-361100 --format={{.State.Status}}
	I1002 12:01:47.905541 2563543 cli_runner.go:164] Run: docker exec multinode-361100 stat /var/lib/dpkg/alternatives/iptables
	I1002 12:01:47.979543 2563543 oci.go:144] the created container "multinode-361100" has a running status.
	I1002 12:01:47.979573 2563543 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/17340-2494243/.minikube/machines/multinode-361100/id_rsa...
	I1002 12:01:48.286999 2563543 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2494243/.minikube/machines/multinode-361100/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1002 12:01:48.287100 2563543 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17340-2494243/.minikube/machines/multinode-361100/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1002 12:01:48.314116 2563543 cli_runner.go:164] Run: docker container inspect multinode-361100 --format={{.State.Status}}
	I1002 12:01:48.334373 2563543 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1002 12:01:48.334393 2563543 kic_runner.go:114] Args: [docker exec --privileged multinode-361100 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1002 12:01:48.429051 2563543 cli_runner.go:164] Run: docker container inspect multinode-361100 --format={{.State.Status}}
	I1002 12:01:48.457363 2563543 machine.go:88] provisioning docker machine ...
	I1002 12:01:48.457392 2563543 ubuntu.go:169] provisioning hostname "multinode-361100"
	I1002 12:01:48.457466 2563543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-361100
	I1002 12:01:48.490377 2563543 main.go:141] libmachine: Using SSH client type: native
	I1002 12:01:48.490807 2563543 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ac3c0] 0x3aeb30 <nil>  [] 0s} 127.0.0.1 35947 <nil> <nil>}
	I1002 12:01:48.490821 2563543 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-361100 && echo "multinode-361100" | sudo tee /etc/hostname
	I1002 12:01:48.491449 2563543 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:57058->127.0.0.1:35947: read: connection reset by peer
	I1002 12:01:51.648906 2563543 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-361100
	
	I1002 12:01:51.649000 2563543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-361100
	I1002 12:01:51.670601 2563543 main.go:141] libmachine: Using SSH client type: native
	I1002 12:01:51.671018 2563543 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ac3c0] 0x3aeb30 <nil>  [] 0s} 127.0.0.1 35947 <nil> <nil>}
	I1002 12:01:51.671044 2563543 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-361100' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-361100/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-361100' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 12:01:51.809799 2563543 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 12:01:51.809837 2563543 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17340-2494243/.minikube CaCertPath:/home/jenkins/minikube-integration/17340-2494243/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17340-2494243/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17340-2494243/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17340-2494243/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17340-2494243/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17340-2494243/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17340-2494243/.minikube}
	I1002 12:01:51.809868 2563543 ubuntu.go:177] setting up certificates
	I1002 12:01:51.809877 2563543 provision.go:83] configureAuth start
	I1002 12:01:51.809950 2563543 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-361100
	I1002 12:01:51.828392 2563543 provision.go:138] copyHostCerts
	I1002 12:01:51.828439 2563543 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2494243/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17340-2494243/.minikube/ca.pem
	I1002 12:01:51.828474 2563543 exec_runner.go:144] found /home/jenkins/minikube-integration/17340-2494243/.minikube/ca.pem, removing ...
	I1002 12:01:51.828488 2563543 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17340-2494243/.minikube/ca.pem
	I1002 12:01:51.828625 2563543 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-2494243/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17340-2494243/.minikube/ca.pem (1082 bytes)
	I1002 12:01:51.828720 2563543 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2494243/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17340-2494243/.minikube/cert.pem
	I1002 12:01:51.828742 2563543 exec_runner.go:144] found /home/jenkins/minikube-integration/17340-2494243/.minikube/cert.pem, removing ...
	I1002 12:01:51.828747 2563543 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17340-2494243/.minikube/cert.pem
	I1002 12:01:51.828782 2563543 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-2494243/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17340-2494243/.minikube/cert.pem (1123 bytes)
	I1002 12:01:51.828836 2563543 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2494243/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17340-2494243/.minikube/key.pem
	I1002 12:01:51.828855 2563543 exec_runner.go:144] found /home/jenkins/minikube-integration/17340-2494243/.minikube/key.pem, removing ...
	I1002 12:01:51.828860 2563543 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17340-2494243/.minikube/key.pem
	I1002 12:01:51.828889 2563543 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-2494243/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17340-2494243/.minikube/key.pem (1675 bytes)
	I1002 12:01:51.828945 2563543 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17340-2494243/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17340-2494243/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17340-2494243/.minikube/certs/ca-key.pem org=jenkins.multinode-361100 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube multinode-361100]
	I1002 12:01:52.149064 2563543 provision.go:172] copyRemoteCerts
	I1002 12:01:52.149136 2563543 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 12:01:52.149179 2563543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-361100
	I1002 12:01:52.168820 2563543 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35947 SSHKeyPath:/home/jenkins/minikube-integration/17340-2494243/.minikube/machines/multinode-361100/id_rsa Username:docker}
	I1002 12:01:52.268075 2563543 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2494243/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1002 12:01:52.268142 2563543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2494243/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 12:01:52.297924 2563543 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2494243/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1002 12:01:52.297986 2563543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2494243/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1002 12:01:52.327899 2563543 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2494243/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1002 12:01:52.328012 2563543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2494243/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 12:01:52.360225 2563543 provision.go:86] duration metric: configureAuth took 550.308784ms
	I1002 12:01:52.360252 2563543 ubuntu.go:193] setting minikube options for container-runtime
	I1002 12:01:52.360454 2563543 config.go:182] Loaded profile config "multinode-361100": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1002 12:01:52.360587 2563543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-361100
	I1002 12:01:52.381156 2563543 main.go:141] libmachine: Using SSH client type: native
	I1002 12:01:52.381654 2563543 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ac3c0] 0x3aeb30 <nil>  [] 0s} 127.0.0.1 35947 <nil> <nil>}
	I1002 12:01:52.381683 2563543 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 12:01:52.640278 2563543 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 12:01:52.640302 2563543 machine.go:91] provisioned docker machine in 4.182920529s
	I1002 12:01:52.640312 2563543 client.go:171] LocalClient.Create took 10.352890396s
	I1002 12:01:52.640324 2563543 start.go:167] duration metric: libmachine.API.Create for "multinode-361100" took 10.352939209s
	I1002 12:01:52.640333 2563543 start.go:300] post-start starting for "multinode-361100" (driver="docker")
	I1002 12:01:52.640343 2563543 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 12:01:52.640413 2563543 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 12:01:52.640465 2563543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-361100
	I1002 12:01:52.660215 2563543 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35947 SSHKeyPath:/home/jenkins/minikube-integration/17340-2494243/.minikube/machines/multinode-361100/id_rsa Username:docker}
	I1002 12:01:52.764450 2563543 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 12:01:52.768751 2563543 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.3 LTS"
	I1002 12:01:52.768770 2563543 command_runner.go:130] > NAME="Ubuntu"
	I1002 12:01:52.768777 2563543 command_runner.go:130] > VERSION_ID="22.04"
	I1002 12:01:52.768784 2563543 command_runner.go:130] > VERSION="22.04.3 LTS (Jammy Jellyfish)"
	I1002 12:01:52.768790 2563543 command_runner.go:130] > VERSION_CODENAME=jammy
	I1002 12:01:52.768795 2563543 command_runner.go:130] > ID=ubuntu
	I1002 12:01:52.768800 2563543 command_runner.go:130] > ID_LIKE=debian
	I1002 12:01:52.768806 2563543 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I1002 12:01:52.768812 2563543 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I1002 12:01:52.768819 2563543 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I1002 12:01:52.768828 2563543 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I1002 12:01:52.768836 2563543 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I1002 12:01:52.768892 2563543 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 12:01:52.768930 2563543 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1002 12:01:52.768945 2563543 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1002 12:01:52.768953 2563543 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1002 12:01:52.768966 2563543 filesync.go:126] Scanning /home/jenkins/minikube-integration/17340-2494243/.minikube/addons for local assets ...
	I1002 12:01:52.769035 2563543 filesync.go:126] Scanning /home/jenkins/minikube-integration/17340-2494243/.minikube/files for local assets ...
	I1002 12:01:52.769118 2563543 filesync.go:149] local asset: /home/jenkins/minikube-integration/17340-2494243/.minikube/files/etc/ssl/certs/24995982.pem -> 24995982.pem in /etc/ssl/certs
	I1002 12:01:52.769128 2563543 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2494243/.minikube/files/etc/ssl/certs/24995982.pem -> /etc/ssl/certs/24995982.pem
	I1002 12:01:52.769226 2563543 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 12:01:52.780795 2563543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2494243/.minikube/files/etc/ssl/certs/24995982.pem --> /etc/ssl/certs/24995982.pem (1708 bytes)
	I1002 12:01:52.811639 2563543 start.go:303] post-start completed in 171.289225ms
	I1002 12:01:52.812089 2563543 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-361100
	I1002 12:01:52.830221 2563543 profile.go:148] Saving config to /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/multinode-361100/config.json ...
	I1002 12:01:52.830537 2563543 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 12:01:52.830590 2563543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-361100
	I1002 12:01:52.849401 2563543 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35947 SSHKeyPath:/home/jenkins/minikube-integration/17340-2494243/.minikube/machines/multinode-361100/id_rsa Username:docker}
	I1002 12:01:52.947117 2563543 command_runner.go:130] > 18%!
	(MISSING)I1002 12:01:52.947234 2563543 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 12:01:52.953024 2563543 command_runner.go:130] > 160G
	I1002 12:01:52.953635 2563543 start.go:128] duration metric: createHost completed in 10.668931209s
	I1002 12:01:52.953654 2563543 start.go:83] releasing machines lock for "multinode-361100", held for 10.669206795s
	I1002 12:01:52.953734 2563543 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-361100
	I1002 12:01:52.972671 2563543 ssh_runner.go:195] Run: cat /version.json
	I1002 12:01:52.972704 2563543 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 12:01:52.972728 2563543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-361100
	I1002 12:01:52.972761 2563543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-361100
	I1002 12:01:52.993769 2563543 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35947 SSHKeyPath:/home/jenkins/minikube-integration/17340-2494243/.minikube/machines/multinode-361100/id_rsa Username:docker}
	I1002 12:01:52.999723 2563543 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35947 SSHKeyPath:/home/jenkins/minikube-integration/17340-2494243/.minikube/machines/multinode-361100/id_rsa Username:docker}
	I1002 12:01:53.089202 2563543 command_runner.go:130] > {"iso_version": "v1.31.0-1694625400-17243", "kicbase_version": "v0.0.40-1694798187-17250", "minikube_version": "v1.31.2", "commit": "c590c2ca0a7db48c4b84c041c2699711a39ab56a"}
	I1002 12:01:53.089335 2563543 ssh_runner.go:195] Run: systemctl --version
	I1002 12:01:53.220949 2563543 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1002 12:01:53.224368 2563543 command_runner.go:130] > systemd 249 (249.11-0ubuntu3.10)
	I1002 12:01:53.224484 2563543 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1002 12:01:53.224582 2563543 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 12:01:53.376827 2563543 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1002 12:01:53.382607 2563543 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I1002 12:01:53.382634 2563543 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I1002 12:01:53.382642 2563543 command_runner.go:130] > Device: 36h/54d	Inode: 2868618     Links: 1
	I1002 12:01:53.382649 2563543 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1002 12:01:53.382657 2563543 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I1002 12:01:53.382664 2563543 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I1002 12:01:53.382673 2563543 command_runner.go:130] > Change: 2023-10-02 07:16:55.608351941 +0000
	I1002 12:01:53.382689 2563543 command_runner.go:130] >  Birth: 2023-10-02 07:16:55.608351941 +0000
	I1002 12:01:53.382937 2563543 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 12:01:53.407336 2563543 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1002 12:01:53.407419 2563543 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 12:01:53.448432 2563543 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I1002 12:01:53.448558 2563543 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I1002 12:01:53.448580 2563543 start.go:469] detecting cgroup driver to use...
	I1002 12:01:53.448619 2563543 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1002 12:01:53.448675 2563543 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 12:01:53.468782 2563543 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 12:01:53.483415 2563543 docker.go:197] disabling cri-docker service (if available) ...
	I1002 12:01:53.483514 2563543 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 12:01:53.501189 2563543 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 12:01:53.519046 2563543 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 12:01:53.625272 2563543 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 12:01:53.739555 2563543 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I1002 12:01:53.739586 2563543 docker.go:213] disabling docker service ...
	I1002 12:01:53.739668 2563543 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 12:01:53.763175 2563543 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 12:01:53.778404 2563543 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 12:01:53.888029 2563543 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I1002 12:01:53.888182 2563543 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 12:01:53.996084 2563543 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I1002 12:01:53.996231 2563543 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 12:01:54.013650 2563543 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 12:01:54.035941 2563543 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1002 12:01:54.037907 2563543 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1002 12:01:54.038007 2563543 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 12:01:54.054587 2563543 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 12:01:54.054743 2563543 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 12:01:54.069056 2563543 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 12:01:54.083538 2563543 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 12:01:54.097745 2563543 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 12:01:54.110788 2563543 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 12:01:54.121564 2563543 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1002 12:01:54.123034 2563543 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 12:01:54.134728 2563543 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 12:01:54.229214 2563543 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 12:01:54.371625 2563543 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 12:01:54.371743 2563543 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 12:01:54.377468 2563543 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1002 12:01:54.377490 2563543 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1002 12:01:54.377500 2563543 command_runner.go:130] > Device: 43h/67d	Inode: 190         Links: 1
	I1002 12:01:54.377508 2563543 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1002 12:01:54.377520 2563543 command_runner.go:130] > Access: 2023-10-02 12:01:54.356349292 +0000
	I1002 12:01:54.377529 2563543 command_runner.go:130] > Modify: 2023-10-02 12:01:54.356349292 +0000
	I1002 12:01:54.377536 2563543 command_runner.go:130] > Change: 2023-10-02 12:01:54.356349292 +0000
	I1002 12:01:54.377541 2563543 command_runner.go:130] >  Birth: -
	I1002 12:01:54.377608 2563543 start.go:537] Will wait 60s for crictl version
	I1002 12:01:54.377661 2563543 ssh_runner.go:195] Run: which crictl
	I1002 12:01:54.382609 2563543 command_runner.go:130] > /usr/bin/crictl
	I1002 12:01:54.382897 2563543 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1002 12:01:54.424243 2563543 command_runner.go:130] > Version:  0.1.0
	I1002 12:01:54.424550 2563543 command_runner.go:130] > RuntimeName:  cri-o
	I1002 12:01:54.424568 2563543 command_runner.go:130] > RuntimeVersion:  1.24.6
	I1002 12:01:54.424575 2563543 command_runner.go:130] > RuntimeApiVersion:  v1
	I1002 12:01:54.427285 2563543 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1002 12:01:54.427369 2563543 ssh_runner.go:195] Run: crio --version
	I1002 12:01:54.475324 2563543 command_runner.go:130] > crio version 1.24.6
	I1002 12:01:54.475370 2563543 command_runner.go:130] > Version:          1.24.6
	I1002 12:01:54.475380 2563543 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I1002 12:01:54.475386 2563543 command_runner.go:130] > GitTreeState:     clean
	I1002 12:01:54.475393 2563543 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I1002 12:01:54.475399 2563543 command_runner.go:130] > GoVersion:        go1.18.2
	I1002 12:01:54.475405 2563543 command_runner.go:130] > Compiler:         gc
	I1002 12:01:54.475411 2563543 command_runner.go:130] > Platform:         linux/arm64
	I1002 12:01:54.475417 2563543 command_runner.go:130] > Linkmode:         dynamic
	I1002 12:01:54.475431 2563543 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1002 12:01:54.475445 2563543 command_runner.go:130] > SeccompEnabled:   true
	I1002 12:01:54.475451 2563543 command_runner.go:130] > AppArmorEnabled:  false
	I1002 12:01:54.478134 2563543 ssh_runner.go:195] Run: crio --version
	I1002 12:01:54.527092 2563543 command_runner.go:130] > crio version 1.24.6
	I1002 12:01:54.527159 2563543 command_runner.go:130] > Version:          1.24.6
	I1002 12:01:54.527181 2563543 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I1002 12:01:54.527203 2563543 command_runner.go:130] > GitTreeState:     clean
	I1002 12:01:54.527239 2563543 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I1002 12:01:54.527265 2563543 command_runner.go:130] > GoVersion:        go1.18.2
	I1002 12:01:54.527288 2563543 command_runner.go:130] > Compiler:         gc
	I1002 12:01:54.527321 2563543 command_runner.go:130] > Platform:         linux/arm64
	I1002 12:01:54.527342 2563543 command_runner.go:130] > Linkmode:         dynamic
	I1002 12:01:54.527375 2563543 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1002 12:01:54.527405 2563543 command_runner.go:130] > SeccompEnabled:   true
	I1002 12:01:54.527428 2563543 command_runner.go:130] > AppArmorEnabled:  false
	I1002 12:01:54.530924 2563543 out.go:177] * Preparing Kubernetes v1.28.2 on CRI-O 1.24.6 ...
	I1002 12:01:54.532617 2563543 cli_runner.go:164] Run: docker network inspect multinode-361100 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 12:01:54.550796 2563543 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I1002 12:01:54.555906 2563543 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 12:01:54.570792 2563543 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1002 12:01:54.570865 2563543 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 12:01:54.639681 2563543 command_runner.go:130] > {
	I1002 12:01:54.639704 2563543 command_runner.go:130] >   "images": [
	I1002 12:01:54.639709 2563543 command_runner.go:130] >     {
	I1002 12:01:54.639719 2563543 command_runner.go:130] >       "id": "04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26",
	I1002 12:01:54.639727 2563543 command_runner.go:130] >       "repoTags": [
	I1002 12:01:54.639735 2563543 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I1002 12:01:54.639739 2563543 command_runner.go:130] >       ],
	I1002 12:01:54.639744 2563543 command_runner.go:130] >       "repoDigests": [
	I1002 12:01:54.639755 2563543 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I1002 12:01:54.639764 2563543 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:f61a1c916e587322444cab4e745a66c8bed6c30208e4dae28d5a1d18c070adb2"
	I1002 12:01:54.639768 2563543 command_runner.go:130] >       ],
	I1002 12:01:54.639774 2563543 command_runner.go:130] >       "size": "60867618",
	I1002 12:01:54.639779 2563543 command_runner.go:130] >       "uid": null,
	I1002 12:01:54.639783 2563543 command_runner.go:130] >       "username": "",
	I1002 12:01:54.639792 2563543 command_runner.go:130] >       "spec": null,
	I1002 12:01:54.639797 2563543 command_runner.go:130] >       "pinned": false
	I1002 12:01:54.639801 2563543 command_runner.go:130] >     },
	I1002 12:01:54.639805 2563543 command_runner.go:130] >     {
	I1002 12:01:54.639815 2563543 command_runner.go:130] >       "id": "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1002 12:01:54.639820 2563543 command_runner.go:130] >       "repoTags": [
	I1002 12:01:54.639826 2563543 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1002 12:01:54.639831 2563543 command_runner.go:130] >       ],
	I1002 12:01:54.639836 2563543 command_runner.go:130] >       "repoDigests": [
	I1002 12:01:54.639846 2563543 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1002 12:01:54.639855 2563543 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1002 12:01:54.639860 2563543 command_runner.go:130] >       ],
	I1002 12:01:54.639867 2563543 command_runner.go:130] >       "size": "29037500",
	I1002 12:01:54.639872 2563543 command_runner.go:130] >       "uid": null,
	I1002 12:01:54.639877 2563543 command_runner.go:130] >       "username": "",
	I1002 12:01:54.639882 2563543 command_runner.go:130] >       "spec": null,
	I1002 12:01:54.639887 2563543 command_runner.go:130] >       "pinned": false
	I1002 12:01:54.639891 2563543 command_runner.go:130] >     },
	I1002 12:01:54.639896 2563543 command_runner.go:130] >     {
	I1002 12:01:54.639903 2563543 command_runner.go:130] >       "id": "97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108",
	I1002 12:01:54.639908 2563543 command_runner.go:130] >       "repoTags": [
	I1002 12:01:54.639914 2563543 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I1002 12:01:54.639919 2563543 command_runner.go:130] >       ],
	I1002 12:01:54.639924 2563543 command_runner.go:130] >       "repoDigests": [
	I1002 12:01:54.639934 2563543 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105",
	I1002 12:01:54.639943 2563543 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"
	I1002 12:01:54.639947 2563543 command_runner.go:130] >       ],
	I1002 12:01:54.639952 2563543 command_runner.go:130] >       "size": "51393451",
	I1002 12:01:54.639958 2563543 command_runner.go:130] >       "uid": null,
	I1002 12:01:54.639963 2563543 command_runner.go:130] >       "username": "",
	I1002 12:01:54.639968 2563543 command_runner.go:130] >       "spec": null,
	I1002 12:01:54.639978 2563543 command_runner.go:130] >       "pinned": false
	I1002 12:01:54.639982 2563543 command_runner.go:130] >     },
	I1002 12:01:54.639986 2563543 command_runner.go:130] >     {
	I1002 12:01:54.639994 2563543 command_runner.go:130] >       "id": "9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace",
	I1002 12:01:54.640001 2563543 command_runner.go:130] >       "repoTags": [
	I1002 12:01:54.640007 2563543 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I1002 12:01:54.640011 2563543 command_runner.go:130] >       ],
	I1002 12:01:54.640016 2563543 command_runner.go:130] >       "repoDigests": [
	I1002 12:01:54.640025 2563543 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3",
	I1002 12:01:54.640035 2563543 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e60789d18cc66486e6db4094383f9732280092f07a1f5455ecbe11d404c8e48b"
	I1002 12:01:54.640061 2563543 command_runner.go:130] >       ],
	I1002 12:01:54.640068 2563543 command_runner.go:130] >       "size": "182203183",
	I1002 12:01:54.640072 2563543 command_runner.go:130] >       "uid": {
	I1002 12:01:54.640077 2563543 command_runner.go:130] >         "value": "0"
	I1002 12:01:54.640082 2563543 command_runner.go:130] >       },
	I1002 12:01:54.640086 2563543 command_runner.go:130] >       "username": "",
	I1002 12:01:54.640091 2563543 command_runner.go:130] >       "spec": null,
	I1002 12:01:54.640096 2563543 command_runner.go:130] >       "pinned": false
	I1002 12:01:54.640101 2563543 command_runner.go:130] >     },
	I1002 12:01:54.640105 2563543 command_runner.go:130] >     {
	I1002 12:01:54.640113 2563543 command_runner.go:130] >       "id": "30bb499447fe1bc655853e2e8ac386cdeb28c80263536259cb54f290f9a58d6c",
	I1002 12:01:54.640118 2563543 command_runner.go:130] >       "repoTags": [
	I1002 12:01:54.640124 2563543 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.2"
	I1002 12:01:54.640128 2563543 command_runner.go:130] >       ],
	I1002 12:01:54.640133 2563543 command_runner.go:130] >       "repoDigests": [
	I1002 12:01:54.640143 2563543 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:07742a71be5e2ac5dc434618fa720ba38bebb463e3bdc0c58b600b4f7716bc3d",
	I1002 12:01:54.640152 2563543 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:6beea2e5531a0606613594fd3ed92d71bbdcef99dd3237522049a0b32cad736c"
	I1002 12:01:54.640156 2563543 command_runner.go:130] >       ],
	I1002 12:01:54.640162 2563543 command_runner.go:130] >       "size": "121054158",
	I1002 12:01:54.640167 2563543 command_runner.go:130] >       "uid": {
	I1002 12:01:54.640172 2563543 command_runner.go:130] >         "value": "0"
	I1002 12:01:54.640176 2563543 command_runner.go:130] >       },
	I1002 12:01:54.640181 2563543 command_runner.go:130] >       "username": "",
	I1002 12:01:54.640186 2563543 command_runner.go:130] >       "spec": null,
	I1002 12:01:54.640192 2563543 command_runner.go:130] >       "pinned": false
	I1002 12:01:54.640197 2563543 command_runner.go:130] >     },
	I1002 12:01:54.640201 2563543 command_runner.go:130] >     {
	I1002 12:01:54.640209 2563543 command_runner.go:130] >       "id": "89d57b83c17862d0ca2dd214e9e5ad425f8d67ecba32d10b846f8d22d3b5597c",
	I1002 12:01:54.640214 2563543 command_runner.go:130] >       "repoTags": [
	I1002 12:01:54.640220 2563543 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.2"
	I1002 12:01:54.640225 2563543 command_runner.go:130] >       ],
	I1002 12:01:54.640230 2563543 command_runner.go:130] >       "repoDigests": [
	I1002 12:01:54.640240 2563543 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:3c85f8a91743f4c306163137b121c64816c5c15bf2f002c9238ce0a4ac22b5c8",
	I1002 12:01:54.640249 2563543 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4"
	I1002 12:01:54.640254 2563543 command_runner.go:130] >       ],
	I1002 12:01:54.640261 2563543 command_runner.go:130] >       "size": "117187380",
	I1002 12:01:54.640266 2563543 command_runner.go:130] >       "uid": {
	I1002 12:01:54.640270 2563543 command_runner.go:130] >         "value": "0"
	I1002 12:01:54.640275 2563543 command_runner.go:130] >       },
	I1002 12:01:54.640280 2563543 command_runner.go:130] >       "username": "",
	I1002 12:01:54.640285 2563543 command_runner.go:130] >       "spec": null,
	I1002 12:01:54.640290 2563543 command_runner.go:130] >       "pinned": false
	I1002 12:01:54.640294 2563543 command_runner.go:130] >     },
	I1002 12:01:54.640298 2563543 command_runner.go:130] >     {
	I1002 12:01:54.640306 2563543 command_runner.go:130] >       "id": "7da62c127fc0f2c3473babe4dd0fe1da874278c4e524a490b1781e3e0e6dddfa",
	I1002 12:01:54.640311 2563543 command_runner.go:130] >       "repoTags": [
	I1002 12:01:54.640316 2563543 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.2"
	I1002 12:01:54.640321 2563543 command_runner.go:130] >       ],
	I1002 12:01:54.640326 2563543 command_runner.go:130] >       "repoDigests": [
	I1002 12:01:54.640335 2563543 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:41c8f92d1cd571e0e36af431f35c78379f84f5daf5b85d43014a9940d697afcf",
	I1002 12:01:54.640344 2563543 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:714d43ef0334cfb0e15ffd89f0b385681374b72a4865be28ff891b6297c015b8"
	I1002 12:01:54.640348 2563543 command_runner.go:130] >       ],
	I1002 12:01:54.640353 2563543 command_runner.go:130] >       "size": "69926807",
	I1002 12:01:54.640358 2563543 command_runner.go:130] >       "uid": null,
	I1002 12:01:54.640363 2563543 command_runner.go:130] >       "username": "",
	I1002 12:01:54.640368 2563543 command_runner.go:130] >       "spec": null,
	I1002 12:01:54.640373 2563543 command_runner.go:130] >       "pinned": false
	I1002 12:01:54.640377 2563543 command_runner.go:130] >     },
	I1002 12:01:54.640381 2563543 command_runner.go:130] >     {
	I1002 12:01:54.640389 2563543 command_runner.go:130] >       "id": "64fc40cee3716a4596d219b360ce536adb7998eaeae3f5dbb774d6503f5039d7",
	I1002 12:01:54.640394 2563543 command_runner.go:130] >       "repoTags": [
	I1002 12:01:54.640400 2563543 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.2"
	I1002 12:01:54.640404 2563543 command_runner.go:130] >       ],
	I1002 12:01:54.640409 2563543 command_runner.go:130] >       "repoDigests": [
	I1002 12:01:54.640444 2563543 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab",
	I1002 12:01:54.640453 2563543 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:f673cc4710d8ed6e3bd224b5641d2537d08e19177a291c2d9e189ea16f081c88"
	I1002 12:01:54.640458 2563543 command_runner.go:130] >       ],
	I1002 12:01:54.640463 2563543 command_runner.go:130] >       "size": "59188020",
	I1002 12:01:54.640468 2563543 command_runner.go:130] >       "uid": {
	I1002 12:01:54.640473 2563543 command_runner.go:130] >         "value": "0"
	I1002 12:01:54.640477 2563543 command_runner.go:130] >       },
	I1002 12:01:54.640483 2563543 command_runner.go:130] >       "username": "",
	I1002 12:01:54.640488 2563543 command_runner.go:130] >       "spec": null,
	I1002 12:01:54.640492 2563543 command_runner.go:130] >       "pinned": false
	I1002 12:01:54.640497 2563543 command_runner.go:130] >     },
	I1002 12:01:54.640501 2563543 command_runner.go:130] >     {
	I1002 12:01:54.640509 2563543 command_runner.go:130] >       "id": "829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e",
	I1002 12:01:54.640514 2563543 command_runner.go:130] >       "repoTags": [
	I1002 12:01:54.640540 2563543 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I1002 12:01:54.640545 2563543 command_runner.go:130] >       ],
	I1002 12:01:54.640551 2563543 command_runner.go:130] >       "repoDigests": [
	I1002 12:01:54.640559 2563543 command_runner.go:130] >         "registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6",
	I1002 12:01:54.640570 2563543 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"
	I1002 12:01:54.640574 2563543 command_runner.go:130] >       ],
	I1002 12:01:54.640579 2563543 command_runner.go:130] >       "size": "520014",
	I1002 12:01:54.640584 2563543 command_runner.go:130] >       "uid": {
	I1002 12:01:54.640589 2563543 command_runner.go:130] >         "value": "65535"
	I1002 12:01:54.640593 2563543 command_runner.go:130] >       },
	I1002 12:01:54.640598 2563543 command_runner.go:130] >       "username": "",
	I1002 12:01:54.640603 2563543 command_runner.go:130] >       "spec": null,
	I1002 12:01:54.640608 2563543 command_runner.go:130] >       "pinned": false
	I1002 12:01:54.640612 2563543 command_runner.go:130] >     }
	I1002 12:01:54.640616 2563543 command_runner.go:130] >   ]
	I1002 12:01:54.640621 2563543 command_runner.go:130] > }
	I1002 12:01:54.642597 2563543 crio.go:496] all images are preloaded for cri-o runtime.
	I1002 12:01:54.642620 2563543 crio.go:415] Images already preloaded, skipping extraction
	I1002 12:01:54.642682 2563543 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 12:01:54.685783 2563543 command_runner.go:130] > {
	I1002 12:01:54.685803 2563543 command_runner.go:130] >   "images": [
	I1002 12:01:54.685809 2563543 command_runner.go:130] >     {
	I1002 12:01:54.685818 2563543 command_runner.go:130] >       "id": "04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26",
	I1002 12:01:54.685824 2563543 command_runner.go:130] >       "repoTags": [
	I1002 12:01:54.685832 2563543 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I1002 12:01:54.685837 2563543 command_runner.go:130] >       ],
	I1002 12:01:54.685842 2563543 command_runner.go:130] >       "repoDigests": [
	I1002 12:01:54.685853 2563543 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I1002 12:01:54.685866 2563543 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:f61a1c916e587322444cab4e745a66c8bed6c30208e4dae28d5a1d18c070adb2"
	I1002 12:01:54.685875 2563543 command_runner.go:130] >       ],
	I1002 12:01:54.685881 2563543 command_runner.go:130] >       "size": "60867618",
	I1002 12:01:54.685889 2563543 command_runner.go:130] >       "uid": null,
	I1002 12:01:54.685897 2563543 command_runner.go:130] >       "username": "",
	I1002 12:01:54.685907 2563543 command_runner.go:130] >       "spec": null,
	I1002 12:01:54.685914 2563543 command_runner.go:130] >       "pinned": false
	I1002 12:01:54.685922 2563543 command_runner.go:130] >     },
	I1002 12:01:54.685927 2563543 command_runner.go:130] >     {
	I1002 12:01:54.685935 2563543 command_runner.go:130] >       "id": "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1002 12:01:54.685940 2563543 command_runner.go:130] >       "repoTags": [
	I1002 12:01:54.685947 2563543 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1002 12:01:54.685952 2563543 command_runner.go:130] >       ],
	I1002 12:01:54.685957 2563543 command_runner.go:130] >       "repoDigests": [
	I1002 12:01:54.685967 2563543 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1002 12:01:54.685983 2563543 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1002 12:01:54.685988 2563543 command_runner.go:130] >       ],
	I1002 12:01:54.685995 2563543 command_runner.go:130] >       "size": "29037500",
	I1002 12:01:54.686002 2563543 command_runner.go:130] >       "uid": null,
	I1002 12:01:54.686007 2563543 command_runner.go:130] >       "username": "",
	I1002 12:01:54.686015 2563543 command_runner.go:130] >       "spec": null,
	I1002 12:01:54.686020 2563543 command_runner.go:130] >       "pinned": false
	I1002 12:01:54.686025 2563543 command_runner.go:130] >     },
	I1002 12:01:54.686032 2563543 command_runner.go:130] >     {
	I1002 12:01:54.686039 2563543 command_runner.go:130] >       "id": "97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108",
	I1002 12:01:54.686044 2563543 command_runner.go:130] >       "repoTags": [
	I1002 12:01:54.686051 2563543 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I1002 12:01:54.686059 2563543 command_runner.go:130] >       ],
	I1002 12:01:54.686065 2563543 command_runner.go:130] >       "repoDigests": [
	I1002 12:01:54.686075 2563543 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105",
	I1002 12:01:54.686087 2563543 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"
	I1002 12:01:54.686092 2563543 command_runner.go:130] >       ],
	I1002 12:01:54.686098 2563543 command_runner.go:130] >       "size": "51393451",
	I1002 12:01:54.686105 2563543 command_runner.go:130] >       "uid": null,
	I1002 12:01:54.686113 2563543 command_runner.go:130] >       "username": "",
	I1002 12:01:54.686120 2563543 command_runner.go:130] >       "spec": null,
	I1002 12:01:54.686125 2563543 command_runner.go:130] >       "pinned": false
	I1002 12:01:54.686130 2563543 command_runner.go:130] >     },
	I1002 12:01:54.686134 2563543 command_runner.go:130] >     {
	I1002 12:01:54.686145 2563543 command_runner.go:130] >       "id": "9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace",
	I1002 12:01:54.686152 2563543 command_runner.go:130] >       "repoTags": [
	I1002 12:01:54.686159 2563543 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I1002 12:01:54.686169 2563543 command_runner.go:130] >       ],
	I1002 12:01:54.686175 2563543 command_runner.go:130] >       "repoDigests": [
	I1002 12:01:54.686186 2563543 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3",
	I1002 12:01:54.686196 2563543 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e60789d18cc66486e6db4094383f9732280092f07a1f5455ecbe11d404c8e48b"
	I1002 12:01:54.686203 2563543 command_runner.go:130] >       ],
	I1002 12:01:54.686213 2563543 command_runner.go:130] >       "size": "182203183",
	I1002 12:01:54.686218 2563543 command_runner.go:130] >       "uid": {
	I1002 12:01:54.686223 2563543 command_runner.go:130] >         "value": "0"
	I1002 12:01:54.686228 2563543 command_runner.go:130] >       },
	I1002 12:01:54.686233 2563543 command_runner.go:130] >       "username": "",
	I1002 12:01:54.686241 2563543 command_runner.go:130] >       "spec": null,
	I1002 12:01:54.686248 2563543 command_runner.go:130] >       "pinned": false
	I1002 12:01:54.686254 2563543 command_runner.go:130] >     },
	I1002 12:01:54.686261 2563543 command_runner.go:130] >     {
	I1002 12:01:54.686269 2563543 command_runner.go:130] >       "id": "30bb499447fe1bc655853e2e8ac386cdeb28c80263536259cb54f290f9a58d6c",
	I1002 12:01:54.686275 2563543 command_runner.go:130] >       "repoTags": [
	I1002 12:01:54.686285 2563543 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.2"
	I1002 12:01:54.686300 2563543 command_runner.go:130] >       ],
	I1002 12:01:54.686308 2563543 command_runner.go:130] >       "repoDigests": [
	I1002 12:01:54.686318 2563543 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:07742a71be5e2ac5dc434618fa720ba38bebb463e3bdc0c58b600b4f7716bc3d",
	I1002 12:01:54.686327 2563543 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:6beea2e5531a0606613594fd3ed92d71bbdcef99dd3237522049a0b32cad736c"
	I1002 12:01:54.686334 2563543 command_runner.go:130] >       ],
	I1002 12:01:54.686340 2563543 command_runner.go:130] >       "size": "121054158",
	I1002 12:01:54.686347 2563543 command_runner.go:130] >       "uid": {
	I1002 12:01:54.686353 2563543 command_runner.go:130] >         "value": "0"
	I1002 12:01:54.686358 2563543 command_runner.go:130] >       },
	I1002 12:01:54.686367 2563543 command_runner.go:130] >       "username": "",
	I1002 12:01:54.686372 2563543 command_runner.go:130] >       "spec": null,
	I1002 12:01:54.686377 2563543 command_runner.go:130] >       "pinned": false
	I1002 12:01:54.686384 2563543 command_runner.go:130] >     },
	I1002 12:01:54.686388 2563543 command_runner.go:130] >     {
	I1002 12:01:54.686396 2563543 command_runner.go:130] >       "id": "89d57b83c17862d0ca2dd214e9e5ad425f8d67ecba32d10b846f8d22d3b5597c",
	I1002 12:01:54.686401 2563543 command_runner.go:130] >       "repoTags": [
	I1002 12:01:54.686408 2563543 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.2"
	I1002 12:01:54.686415 2563543 command_runner.go:130] >       ],
	I1002 12:01:54.686422 2563543 command_runner.go:130] >       "repoDigests": [
	I1002 12:01:54.686435 2563543 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:3c85f8a91743f4c306163137b121c64816c5c15bf2f002c9238ce0a4ac22b5c8",
	I1002 12:01:54.686445 2563543 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4"
	I1002 12:01:54.686453 2563543 command_runner.go:130] >       ],
	I1002 12:01:54.686458 2563543 command_runner.go:130] >       "size": "117187380",
	I1002 12:01:54.686463 2563543 command_runner.go:130] >       "uid": {
	I1002 12:01:54.686470 2563543 command_runner.go:130] >         "value": "0"
	I1002 12:01:54.686475 2563543 command_runner.go:130] >       },
	I1002 12:01:54.686480 2563543 command_runner.go:130] >       "username": "",
	I1002 12:01:54.686485 2563543 command_runner.go:130] >       "spec": null,
	I1002 12:01:54.686490 2563543 command_runner.go:130] >       "pinned": false
	I1002 12:01:54.686497 2563543 command_runner.go:130] >     },
	I1002 12:01:54.686505 2563543 command_runner.go:130] >     {
	I1002 12:01:54.686513 2563543 command_runner.go:130] >       "id": "7da62c127fc0f2c3473babe4dd0fe1da874278c4e524a490b1781e3e0e6dddfa",
	I1002 12:01:54.686521 2563543 command_runner.go:130] >       "repoTags": [
	I1002 12:01:54.686527 2563543 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.2"
	I1002 12:01:54.686531 2563543 command_runner.go:130] >       ],
	I1002 12:01:54.686537 2563543 command_runner.go:130] >       "repoDigests": [
	I1002 12:01:54.686548 2563543 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:41c8f92d1cd571e0e36af431f35c78379f84f5daf5b85d43014a9940d697afcf",
	I1002 12:01:54.686558 2563543 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:714d43ef0334cfb0e15ffd89f0b385681374b72a4865be28ff891b6297c015b8"
	I1002 12:01:54.686562 2563543 command_runner.go:130] >       ],
	I1002 12:01:54.686568 2563543 command_runner.go:130] >       "size": "69926807",
	I1002 12:01:54.686574 2563543 command_runner.go:130] >       "uid": null,
	I1002 12:01:54.686583 2563543 command_runner.go:130] >       "username": "",
	I1002 12:01:54.686589 2563543 command_runner.go:130] >       "spec": null,
	I1002 12:01:54.686596 2563543 command_runner.go:130] >       "pinned": false
	I1002 12:01:54.686601 2563543 command_runner.go:130] >     },
	I1002 12:01:54.686605 2563543 command_runner.go:130] >     {
	I1002 12:01:54.686615 2563543 command_runner.go:130] >       "id": "64fc40cee3716a4596d219b360ce536adb7998eaeae3f5dbb774d6503f5039d7",
	I1002 12:01:54.686624 2563543 command_runner.go:130] >       "repoTags": [
	I1002 12:01:54.686630 2563543 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.2"
	I1002 12:01:54.686635 2563543 command_runner.go:130] >       ],
	I1002 12:01:54.686640 2563543 command_runner.go:130] >       "repoDigests": [
	I1002 12:01:54.686656 2563543 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab",
	I1002 12:01:54.686668 2563543 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:f673cc4710d8ed6e3bd224b5641d2537d08e19177a291c2d9e189ea16f081c88"
	I1002 12:01:54.686673 2563543 command_runner.go:130] >       ],
	I1002 12:01:54.686679 2563543 command_runner.go:130] >       "size": "59188020",
	I1002 12:01:54.686686 2563543 command_runner.go:130] >       "uid": {
	I1002 12:01:54.686691 2563543 command_runner.go:130] >         "value": "0"
	I1002 12:01:54.686696 2563543 command_runner.go:130] >       },
	I1002 12:01:54.686703 2563543 command_runner.go:130] >       "username": "",
	I1002 12:01:54.686708 2563543 command_runner.go:130] >       "spec": null,
	I1002 12:01:54.686713 2563543 command_runner.go:130] >       "pinned": false
	I1002 12:01:54.686718 2563543 command_runner.go:130] >     },
	I1002 12:01:54.686723 2563543 command_runner.go:130] >     {
	I1002 12:01:54.686731 2563543 command_runner.go:130] >       "id": "829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e",
	I1002 12:01:54.686739 2563543 command_runner.go:130] >       "repoTags": [
	I1002 12:01:54.686745 2563543 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I1002 12:01:54.686752 2563543 command_runner.go:130] >       ],
	I1002 12:01:54.686757 2563543 command_runner.go:130] >       "repoDigests": [
	I1002 12:01:54.686766 2563543 command_runner.go:130] >         "registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6",
	I1002 12:01:54.686778 2563543 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"
	I1002 12:01:54.686783 2563543 command_runner.go:130] >       ],
	I1002 12:01:54.686790 2563543 command_runner.go:130] >       "size": "520014",
	I1002 12:01:54.686796 2563543 command_runner.go:130] >       "uid": {
	I1002 12:01:54.686801 2563543 command_runner.go:130] >         "value": "65535"
	I1002 12:01:54.686805 2563543 command_runner.go:130] >       },
	I1002 12:01:54.686810 2563543 command_runner.go:130] >       "username": "",
	I1002 12:01:54.686815 2563543 command_runner.go:130] >       "spec": null,
	I1002 12:01:54.686823 2563543 command_runner.go:130] >       "pinned": false
	I1002 12:01:54.686830 2563543 command_runner.go:130] >     }
	I1002 12:01:54.686834 2563543 command_runner.go:130] >   ]
	I1002 12:01:54.686841 2563543 command_runner.go:130] > }
	I1002 12:01:54.686974 2563543 crio.go:496] all images are preloaded for cri-o runtime.
	I1002 12:01:54.686986 2563543 cache_images.go:84] Images are preloaded, skipping loading
	I1002 12:01:54.687066 2563543 ssh_runner.go:195] Run: crio config
	I1002 12:01:54.738719 2563543 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1002 12:01:54.738747 2563543 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1002 12:01:54.738756 2563543 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1002 12:01:54.738763 2563543 command_runner.go:130] > #
	I1002 12:01:54.738773 2563543 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1002 12:01:54.738781 2563543 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1002 12:01:54.738793 2563543 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1002 12:01:54.738811 2563543 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1002 12:01:54.738820 2563543 command_runner.go:130] > # reload'.
	I1002 12:01:54.738828 2563543 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1002 12:01:54.738842 2563543 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1002 12:01:54.738850 2563543 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1002 12:01:54.738860 2563543 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1002 12:01:54.738865 2563543 command_runner.go:130] > [crio]
	I1002 12:01:54.738873 2563543 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1002 12:01:54.738881 2563543 command_runner.go:130] > # containers images, in this directory.
	I1002 12:01:54.738891 2563543 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1002 12:01:54.738902 2563543 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1002 12:01:54.739132 2563543 command_runner.go:130] > # runroot = "/tmp/containers-user-1000/containers"
	I1002 12:01:54.739150 2563543 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1002 12:01:54.739159 2563543 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1002 12:01:54.739165 2563543 command_runner.go:130] > # storage_driver = "vfs"
	I1002 12:01:54.739174 2563543 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1002 12:01:54.739185 2563543 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1002 12:01:54.739190 2563543 command_runner.go:130] > # storage_option = [
	I1002 12:01:54.739424 2563543 command_runner.go:130] > # ]
	I1002 12:01:54.739439 2563543 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1002 12:01:54.739447 2563543 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1002 12:01:54.739455 2563543 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1002 12:01:54.739470 2563543 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1002 12:01:54.739478 2563543 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1002 12:01:54.739487 2563543 command_runner.go:130] > # always happen on a node reboot
	I1002 12:01:54.739494 2563543 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1002 12:01:54.739501 2563543 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1002 12:01:54.739511 2563543 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1002 12:01:54.739522 2563543 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1002 12:01:54.739532 2563543 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I1002 12:01:54.739542 2563543 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1002 12:01:54.739551 2563543 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1002 12:01:54.739560 2563543 command_runner.go:130] > # internal_wipe = true
	I1002 12:01:54.739567 2563543 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1002 12:01:54.739580 2563543 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1002 12:01:54.739587 2563543 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1002 12:01:54.739594 2563543 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1002 12:01:54.739609 2563543 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1002 12:01:54.739614 2563543 command_runner.go:130] > [crio.api]
	I1002 12:01:54.739627 2563543 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1002 12:01:54.739633 2563543 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1002 12:01:54.739639 2563543 command_runner.go:130] > # IP address on which the stream server will listen.
	I1002 12:01:54.739647 2563543 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1002 12:01:54.739655 2563543 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1002 12:01:54.739664 2563543 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1002 12:01:54.739670 2563543 command_runner.go:130] > # stream_port = "0"
	I1002 12:01:54.739681 2563543 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1002 12:01:54.739687 2563543 command_runner.go:130] > # stream_enable_tls = false
	I1002 12:01:54.739699 2563543 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1002 12:01:54.739705 2563543 command_runner.go:130] > # stream_idle_timeout = ""
	I1002 12:01:54.739713 2563543 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1002 12:01:54.739720 2563543 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1002 12:01:54.739724 2563543 command_runner.go:130] > # minutes.
	I1002 12:01:54.739729 2563543 command_runner.go:130] > # stream_tls_cert = ""
	I1002 12:01:54.739737 2563543 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1002 12:01:54.739744 2563543 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1002 12:01:54.739752 2563543 command_runner.go:130] > # stream_tls_key = ""
	I1002 12:01:54.739759 2563543 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1002 12:01:54.739767 2563543 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1002 12:01:54.739777 2563543 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1002 12:01:54.739783 2563543 command_runner.go:130] > # stream_tls_ca = ""
	I1002 12:01:54.739797 2563543 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I1002 12:01:54.739803 2563543 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1002 12:01:54.739815 2563543 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I1002 12:01:54.739822 2563543 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1002 12:01:54.739854 2563543 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1002 12:01:54.739867 2563543 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1002 12:01:54.739872 2563543 command_runner.go:130] > [crio.runtime]
	I1002 12:01:54.739879 2563543 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1002 12:01:54.739890 2563543 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1002 12:01:54.739895 2563543 command_runner.go:130] > # "nofile=1024:2048"
	I1002 12:01:54.739909 2563543 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1002 12:01:54.739914 2563543 command_runner.go:130] > # default_ulimits = [
	I1002 12:01:54.739922 2563543 command_runner.go:130] > # ]
	I1002 12:01:54.739929 2563543 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1002 12:01:54.739937 2563543 command_runner.go:130] > # no_pivot = false
	I1002 12:01:54.739948 2563543 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1002 12:01:54.739956 2563543 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1002 12:01:54.739962 2563543 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1002 12:01:54.739971 2563543 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1002 12:01:54.739979 2563543 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1002 12:01:54.739991 2563543 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1002 12:01:54.739996 2563543 command_runner.go:130] > # conmon = ""
	I1002 12:01:54.740002 2563543 command_runner.go:130] > # Cgroup setting for conmon
	I1002 12:01:54.740013 2563543 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1002 12:01:54.740018 2563543 command_runner.go:130] > conmon_cgroup = "pod"
	I1002 12:01:54.740026 2563543 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1002 12:01:54.740036 2563543 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1002 12:01:54.740100 2563543 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1002 12:01:54.740106 2563543 command_runner.go:130] > # conmon_env = [
	I1002 12:01:54.740113 2563543 command_runner.go:130] > # ]
	I1002 12:01:54.740120 2563543 command_runner.go:130] > # Additional environment variables to set for all the
	I1002 12:01:54.740129 2563543 command_runner.go:130] > # containers. These are overridden if set in the
	I1002 12:01:54.740137 2563543 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1002 12:01:54.740147 2563543 command_runner.go:130] > # default_env = [
	I1002 12:01:54.740152 2563543 command_runner.go:130] > # ]
	I1002 12:01:54.740159 2563543 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1002 12:01:54.740168 2563543 command_runner.go:130] > # selinux = false
	I1002 12:01:54.740176 2563543 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1002 12:01:54.740189 2563543 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1002 12:01:54.740196 2563543 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1002 12:01:54.740201 2563543 command_runner.go:130] > # seccomp_profile = ""
	I1002 12:01:54.740210 2563543 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1002 12:01:54.740220 2563543 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1002 12:01:54.740228 2563543 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1002 12:01:54.740239 2563543 command_runner.go:130] > # which might increase security.
	I1002 12:01:54.740246 2563543 command_runner.go:130] > # seccomp_use_default_when_empty = true
	I1002 12:01:54.740257 2563543 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1002 12:01:54.740265 2563543 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1002 12:01:54.740276 2563543 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1002 12:01:54.740283 2563543 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1002 12:01:54.740291 2563543 command_runner.go:130] > # This option supports live configuration reload.
	I1002 12:01:54.740301 2563543 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1002 12:01:54.740311 2563543 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1002 12:01:54.740319 2563543 command_runner.go:130] > # the cgroup blockio controller.
	I1002 12:01:54.740601 2563543 command_runner.go:130] > # blockio_config_file = ""
	I1002 12:01:54.740620 2563543 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1002 12:01:54.740627 2563543 command_runner.go:130] > # irqbalance daemon.
	I1002 12:01:54.740634 2563543 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1002 12:01:54.740644 2563543 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1002 12:01:54.740655 2563543 command_runner.go:130] > # This option supports live configuration reload.
	I1002 12:01:54.740661 2563543 command_runner.go:130] > # rdt_config_file = ""
	I1002 12:01:54.740673 2563543 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1002 12:01:54.740678 2563543 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1002 12:01:54.740691 2563543 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1002 12:01:54.740697 2563543 command_runner.go:130] > # separate_pull_cgroup = ""
	I1002 12:01:54.740709 2563543 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1002 12:01:54.740717 2563543 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1002 12:01:54.740722 2563543 command_runner.go:130] > # will be added.
	I1002 12:01:54.740729 2563543 command_runner.go:130] > # default_capabilities = [
	I1002 12:01:54.740734 2563543 command_runner.go:130] > # 	"CHOWN",
	I1002 12:01:54.740745 2563543 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1002 12:01:54.740750 2563543 command_runner.go:130] > # 	"FSETID",
	I1002 12:01:54.740755 2563543 command_runner.go:130] > # 	"FOWNER",
	I1002 12:01:54.740764 2563543 command_runner.go:130] > # 	"SETGID",
	I1002 12:01:54.740769 2563543 command_runner.go:130] > # 	"SETUID",
	I1002 12:01:54.740774 2563543 command_runner.go:130] > # 	"SETPCAP",
	I1002 12:01:54.740784 2563543 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1002 12:01:54.740789 2563543 command_runner.go:130] > # 	"KILL",
	I1002 12:01:54.741011 2563543 command_runner.go:130] > # ]
	I1002 12:01:54.741034 2563543 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1002 12:01:54.741043 2563543 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1002 12:01:54.741052 2563543 command_runner.go:130] > # add_inheritable_capabilities = true
	I1002 12:01:54.741061 2563543 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1002 12:01:54.741072 2563543 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1002 12:01:54.741078 2563543 command_runner.go:130] > # default_sysctls = [
	I1002 12:01:54.741087 2563543 command_runner.go:130] > # ]
	I1002 12:01:54.741093 2563543 command_runner.go:130] > # List of devices on the host that a
	I1002 12:01:54.741101 2563543 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1002 12:01:54.741110 2563543 command_runner.go:130] > # allowed_devices = [
	I1002 12:01:54.741122 2563543 command_runner.go:130] > # 	"/dev/fuse",
	I1002 12:01:54.741127 2563543 command_runner.go:130] > # ]
	I1002 12:01:54.741138 2563543 command_runner.go:130] > # List of additional devices, specified as
	I1002 12:01:54.741158 2563543 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1002 12:01:54.741170 2563543 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1002 12:01:54.741178 2563543 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1002 12:01:54.741192 2563543 command_runner.go:130] > # additional_devices = [
	I1002 12:01:54.741197 2563543 command_runner.go:130] > # ]
	I1002 12:01:54.741208 2563543 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1002 12:01:54.741214 2563543 command_runner.go:130] > # cdi_spec_dirs = [
	I1002 12:01:54.741221 2563543 command_runner.go:130] > # 	"/etc/cdi",
	I1002 12:01:54.741229 2563543 command_runner.go:130] > # 	"/var/run/cdi",
	I1002 12:01:54.741234 2563543 command_runner.go:130] > # ]
	I1002 12:01:54.741244 2563543 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1002 12:01:54.741256 2563543 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1002 12:01:54.741261 2563543 command_runner.go:130] > # Defaults to false.
	I1002 12:01:54.741267 2563543 command_runner.go:130] > # device_ownership_from_security_context = false
	I1002 12:01:54.741279 2563543 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1002 12:01:54.741287 2563543 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1002 12:01:54.741295 2563543 command_runner.go:130] > # hooks_dir = [
	I1002 12:01:54.741302 2563543 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1002 12:01:54.741307 2563543 command_runner.go:130] > # ]
	I1002 12:01:54.741316 2563543 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1002 12:01:54.741325 2563543 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1002 12:01:54.741335 2563543 command_runner.go:130] > # its default mounts from the following two files:
	I1002 12:01:54.741342 2563543 command_runner.go:130] > #
	I1002 12:01:54.741350 2563543 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1002 12:01:54.741359 2563543 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1002 12:01:54.741369 2563543 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1002 12:01:54.741374 2563543 command_runner.go:130] > #
	I1002 12:01:54.741381 2563543 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1002 12:01:54.741393 2563543 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1002 12:01:54.741401 2563543 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1002 12:01:54.741412 2563543 command_runner.go:130] > #      only add mounts it finds in this file.
	I1002 12:01:54.741416 2563543 command_runner.go:130] > #
	I1002 12:01:54.741421 2563543 command_runner.go:130] > # default_mounts_file = ""
	I1002 12:01:54.741431 2563543 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1002 12:01:54.741442 2563543 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1002 12:01:54.741447 2563543 command_runner.go:130] > # pids_limit = 0
	I1002 12:01:54.741460 2563543 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1002 12:01:54.741467 2563543 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1002 12:01:54.741480 2563543 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1002 12:01:54.741490 2563543 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1002 12:01:54.741498 2563543 command_runner.go:130] > # log_size_max = -1
	I1002 12:01:54.741507 2563543 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1002 12:01:54.741514 2563543 command_runner.go:130] > # log_to_journald = false
	I1002 12:01:54.741522 2563543 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1002 12:01:54.741532 2563543 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1002 12:01:54.741538 2563543 command_runner.go:130] > # Path to directory for container attach sockets.
	I1002 12:01:54.741545 2563543 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1002 12:01:54.741555 2563543 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1002 12:01:54.741561 2563543 command_runner.go:130] > # bind_mount_prefix = ""
	I1002 12:01:54.741568 2563543 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1002 12:01:54.741584 2563543 command_runner.go:130] > # read_only = false
	I1002 12:01:54.741610 2563543 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1002 12:01:54.741623 2563543 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1002 12:01:54.741629 2563543 command_runner.go:130] > # live configuration reload.
	I1002 12:01:54.741636 2563543 command_runner.go:130] > # log_level = "info"
	I1002 12:01:54.741648 2563543 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1002 12:01:54.741654 2563543 command_runner.go:130] > # This option supports live configuration reload.
	I1002 12:01:54.741915 2563543 command_runner.go:130] > # log_filter = ""
	I1002 12:01:54.741934 2563543 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1002 12:01:54.741961 2563543 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1002 12:01:54.741977 2563543 command_runner.go:130] > # separated by comma.
	I1002 12:01:54.741983 2563543 command_runner.go:130] > # uid_mappings = ""
	I1002 12:01:54.741995 2563543 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1002 12:01:54.742002 2563543 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1002 12:01:54.742011 2563543 command_runner.go:130] > # separated by comma.
	I1002 12:01:54.742016 2563543 command_runner.go:130] > # gid_mappings = ""
	I1002 12:01:54.742071 2563543 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1002 12:01:54.742079 2563543 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1002 12:01:54.742093 2563543 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1002 12:01:54.742099 2563543 command_runner.go:130] > # minimum_mappable_uid = -1
	I1002 12:01:54.742107 2563543 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1002 12:01:54.742117 2563543 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1002 12:01:54.742133 2563543 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1002 12:01:54.742142 2563543 command_runner.go:130] > # minimum_mappable_gid = -1
	I1002 12:01:54.742150 2563543 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1002 12:01:54.742164 2563543 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1002 12:01:54.742173 2563543 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1002 12:01:54.742182 2563543 command_runner.go:130] > # ctr_stop_timeout = 30
	I1002 12:01:54.742190 2563543 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1002 12:01:54.742197 2563543 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1002 12:01:54.742221 2563543 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1002 12:01:54.742231 2563543 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1002 12:01:54.742237 2563543 command_runner.go:130] > # drop_infra_ctr = true
	I1002 12:01:54.742250 2563543 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1002 12:01:54.742259 2563543 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1002 12:01:54.742272 2563543 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1002 12:01:54.742285 2563543 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1002 12:01:54.742293 2563543 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1002 12:01:54.742303 2563543 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1002 12:01:54.742309 2563543 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1002 12:01:54.742324 2563543 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1002 12:01:54.742329 2563543 command_runner.go:130] > # pinns_path = ""
	I1002 12:01:54.742341 2563543 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1002 12:01:54.742355 2563543 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I1002 12:01:54.742367 2563543 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I1002 12:01:54.742372 2563543 command_runner.go:130] > # default_runtime = "runc"
	I1002 12:01:54.742380 2563543 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1002 12:01:54.742389 2563543 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1002 12:01:54.742403 2563543 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1002 12:01:54.742416 2563543 command_runner.go:130] > # creation as a file is not desired either.
	I1002 12:01:54.742434 2563543 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1002 12:01:54.742446 2563543 command_runner.go:130] > # the hostname is being managed dynamically.
	I1002 12:01:54.742452 2563543 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1002 12:01:54.742456 2563543 command_runner.go:130] > # ]
	I1002 12:01:54.742464 2563543 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1002 12:01:54.742476 2563543 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1002 12:01:54.742485 2563543 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I1002 12:01:54.742496 2563543 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I1002 12:01:54.742506 2563543 command_runner.go:130] > #
	I1002 12:01:54.742516 2563543 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I1002 12:01:54.742522 2563543 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I1002 12:01:54.742528 2563543 command_runner.go:130] > #  runtime_type = "oci"
	I1002 12:01:54.742534 2563543 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I1002 12:01:54.742541 2563543 command_runner.go:130] > #  privileged_without_host_devices = false
	I1002 12:01:54.742549 2563543 command_runner.go:130] > #  allowed_annotations = []
	I1002 12:01:54.742554 2563543 command_runner.go:130] > # Where:
	I1002 12:01:54.742561 2563543 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I1002 12:01:54.742581 2563543 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I1002 12:01:54.742593 2563543 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1002 12:01:54.742601 2563543 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1002 12:01:54.742609 2563543 command_runner.go:130] > #   in $PATH.
	I1002 12:01:54.742616 2563543 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I1002 12:01:54.742622 2563543 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1002 12:01:54.742630 2563543 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I1002 12:01:54.742638 2563543 command_runner.go:130] > #   state.
	I1002 12:01:54.742715 2563543 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1002 12:01:54.742738 2563543 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1002 12:01:54.742746 2563543 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1002 12:01:54.742758 2563543 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1002 12:01:54.742774 2563543 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1002 12:01:54.742783 2563543 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1002 12:01:54.742791 2563543 command_runner.go:130] > #   The currently recognized values are:
	I1002 12:01:54.742807 2563543 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1002 12:01:54.742820 2563543 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1002 12:01:54.742827 2563543 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1002 12:01:54.742835 2563543 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1002 12:01:54.742849 2563543 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1002 12:01:54.742857 2563543 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1002 12:01:54.742868 2563543 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1002 12:01:54.742883 2563543 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I1002 12:01:54.742893 2563543 command_runner.go:130] > #   should be moved to the container's cgroup
	I1002 12:01:54.743188 2563543 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1002 12:01:54.743208 2563543 command_runner.go:130] > runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	I1002 12:01:54.743214 2563543 command_runner.go:130] > runtime_type = "oci"
	I1002 12:01:54.743219 2563543 command_runner.go:130] > runtime_root = "/run/runc"
	I1002 12:01:54.743224 2563543 command_runner.go:130] > runtime_config_path = ""
	I1002 12:01:54.743235 2563543 command_runner.go:130] > monitor_path = ""
	I1002 12:01:54.743241 2563543 command_runner.go:130] > monitor_cgroup = ""
	I1002 12:01:54.743253 2563543 command_runner.go:130] > monitor_exec_cgroup = ""
	I1002 12:01:54.743286 2563543 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I1002 12:01:54.743296 2563543 command_runner.go:130] > # running containers
	I1002 12:01:54.743307 2563543 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I1002 12:01:54.743323 2563543 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I1002 12:01:54.743348 2563543 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I1002 12:01:54.743359 2563543 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I1002 12:01:54.743369 2563543 command_runner.go:130] > # Kata Containers with the default configured VMM
	I1002 12:01:54.743379 2563543 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I1002 12:01:54.743389 2563543 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I1002 12:01:54.743398 2563543 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I1002 12:01:54.743405 2563543 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I1002 12:01:54.743411 2563543 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I1002 12:01:54.743419 2563543 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1002 12:01:54.743429 2563543 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1002 12:01:54.743440 2563543 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1002 12:01:54.743453 2563543 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1002 12:01:54.743466 2563543 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1002 12:01:54.743477 2563543 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1002 12:01:54.743495 2563543 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1002 12:01:54.743506 2563543 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1002 12:01:54.743517 2563543 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1002 12:01:54.743529 2563543 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1002 12:01:54.743537 2563543 command_runner.go:130] > # Example:
	I1002 12:01:54.743543 2563543 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1002 12:01:54.743552 2563543 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1002 12:01:54.743563 2563543 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1002 12:01:54.743570 2563543 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1002 12:01:54.743576 2563543 command_runner.go:130] > # cpuset = 0
	I1002 12:01:54.743581 2563543 command_runner.go:130] > # cpushares = "0-1"
	I1002 12:01:54.743588 2563543 command_runner.go:130] > # Where:
	I1002 12:01:54.743595 2563543 command_runner.go:130] > # The workload name is workload-type.
	I1002 12:01:54.743607 2563543 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1002 12:01:54.743617 2563543 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1002 12:01:54.743627 2563543 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1002 12:01:54.743640 2563543 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1002 12:01:54.743650 2563543 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1002 12:01:54.743654 2563543 command_runner.go:130] > # 
	I1002 12:01:54.743662 2563543 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1002 12:01:54.743673 2563543 command_runner.go:130] > #
	I1002 12:01:54.743682 2563543 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1002 12:01:54.743693 2563543 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1002 12:01:54.743704 2563543 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1002 12:01:54.743716 2563543 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1002 12:01:54.743800 2563543 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1002 12:01:54.743810 2563543 command_runner.go:130] > [crio.image]
	I1002 12:01:54.743818 2563543 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1002 12:01:54.743823 2563543 command_runner.go:130] > # default_transport = "docker://"
	I1002 12:01:54.743831 2563543 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1002 12:01:54.743839 2563543 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1002 12:01:54.743850 2563543 command_runner.go:130] > # global_auth_file = ""
	I1002 12:01:54.743857 2563543 command_runner.go:130] > # The image used to instantiate infra containers.
	I1002 12:01:54.743876 2563543 command_runner.go:130] > # This option supports live configuration reload.
	I1002 12:01:54.743886 2563543 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I1002 12:01:54.743906 2563543 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1002 12:01:54.743924 2563543 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1002 12:01:54.743936 2563543 command_runner.go:130] > # This option supports live configuration reload.
	I1002 12:01:54.743951 2563543 command_runner.go:130] > # pause_image_auth_file = ""
	I1002 12:01:54.743962 2563543 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1002 12:01:54.743975 2563543 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1002 12:01:54.743983 2563543 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1002 12:01:54.743997 2563543 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1002 12:01:54.744006 2563543 command_runner.go:130] > # pause_command = "/pause"
	I1002 12:01:54.744014 2563543 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1002 12:01:54.744030 2563543 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1002 12:01:54.744048 2563543 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1002 12:01:54.744056 2563543 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1002 12:01:54.744062 2563543 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1002 12:01:54.744071 2563543 command_runner.go:130] > # signature_policy = ""
	I1002 12:01:54.744080 2563543 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1002 12:01:54.744091 2563543 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1002 12:01:54.744105 2563543 command_runner.go:130] > # changing them here.
	I1002 12:01:54.744114 2563543 command_runner.go:130] > # insecure_registries = [
	I1002 12:01:54.744118 2563543 command_runner.go:130] > # ]
	I1002 12:01:54.744130 2563543 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1002 12:01:54.744136 2563543 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1002 12:01:54.744149 2563543 command_runner.go:130] > # image_volumes = "mkdir"
	I1002 12:01:54.744160 2563543 command_runner.go:130] > # Temporary directory to use for storing big files
	I1002 12:01:54.744168 2563543 command_runner.go:130] > # big_files_temporary_dir = ""
	I1002 12:01:54.744185 2563543 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1002 12:01:54.744194 2563543 command_runner.go:130] > # CNI plugins.
	I1002 12:01:54.744199 2563543 command_runner.go:130] > [crio.network]
	I1002 12:01:54.744210 2563543 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1002 12:01:54.744217 2563543 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1002 12:01:54.744223 2563543 command_runner.go:130] > # cni_default_network = ""
	I1002 12:01:54.744233 2563543 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1002 12:01:54.744242 2563543 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1002 12:01:54.744259 2563543 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1002 12:01:54.744267 2563543 command_runner.go:130] > # plugin_dirs = [
	I1002 12:01:54.744272 2563543 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1002 12:01:54.744280 2563543 command_runner.go:130] > # ]
	I1002 12:01:54.744288 2563543 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1002 12:01:54.744296 2563543 command_runner.go:130] > [crio.metrics]
	I1002 12:01:54.744303 2563543 command_runner.go:130] > # Globally enable or disable metrics support.
	I1002 12:01:54.744312 2563543 command_runner.go:130] > # enable_metrics = false
	I1002 12:01:54.744318 2563543 command_runner.go:130] > # Specify enabled metrics collectors.
	I1002 12:01:54.744332 2563543 command_runner.go:130] > # By default, all metrics are enabled.
	I1002 12:01:54.744344 2563543 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1002 12:01:54.744355 2563543 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1002 12:01:54.744365 2563543 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1002 12:01:54.744373 2563543 command_runner.go:130] > # metrics_collectors = [
	I1002 12:01:54.744724 2563543 command_runner.go:130] > # 	"operations",
	I1002 12:01:54.744755 2563543 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1002 12:01:54.744763 2563543 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1002 12:01:54.744769 2563543 command_runner.go:130] > # 	"operations_errors",
	I1002 12:01:54.744774 2563543 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1002 12:01:54.744780 2563543 command_runner.go:130] > # 	"image_pulls_by_name",
	I1002 12:01:54.744799 2563543 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1002 12:01:54.744816 2563543 command_runner.go:130] > # 	"image_pulls_failures",
	I1002 12:01:54.744825 2563543 command_runner.go:130] > # 	"image_pulls_successes",
	I1002 12:01:54.744831 2563543 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1002 12:01:54.744839 2563543 command_runner.go:130] > # 	"image_layer_reuse",
	I1002 12:01:54.744845 2563543 command_runner.go:130] > # 	"containers_oom_total",
	I1002 12:01:54.744850 2563543 command_runner.go:130] > # 	"containers_oom",
	I1002 12:01:54.744855 2563543 command_runner.go:130] > # 	"processes_defunct",
	I1002 12:01:54.744870 2563543 command_runner.go:130] > # 	"operations_total",
	I1002 12:01:54.744883 2563543 command_runner.go:130] > # 	"operations_latency_seconds",
	I1002 12:01:54.744893 2563543 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1002 12:01:54.744902 2563543 command_runner.go:130] > # 	"operations_errors_total",
	I1002 12:01:54.744912 2563543 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1002 12:01:54.744918 2563543 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1002 12:01:54.744933 2563543 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1002 12:01:54.744945 2563543 command_runner.go:130] > # 	"image_pulls_success_total",
	I1002 12:01:54.744954 2563543 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1002 12:01:54.744960 2563543 command_runner.go:130] > # 	"containers_oom_count_total",
	I1002 12:01:54.744974 2563543 command_runner.go:130] > # ]
	I1002 12:01:54.744984 2563543 command_runner.go:130] > # The port on which the metrics server will listen.
	I1002 12:01:54.745000 2563543 command_runner.go:130] > # metrics_port = 9090
	I1002 12:01:54.745016 2563543 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1002 12:01:54.745023 2563543 command_runner.go:130] > # metrics_socket = ""
	I1002 12:01:54.745033 2563543 command_runner.go:130] > # The certificate for the secure metrics server.
	I1002 12:01:54.745041 2563543 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1002 12:01:54.745052 2563543 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1002 12:01:54.745062 2563543 command_runner.go:130] > # certificate on any modification event.
	I1002 12:01:54.745073 2563543 command_runner.go:130] > # metrics_cert = ""
	I1002 12:01:54.745083 2563543 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1002 12:01:54.745098 2563543 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1002 12:01:54.745103 2563543 command_runner.go:130] > # metrics_key = ""
	I1002 12:01:54.745111 2563543 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1002 12:01:54.745125 2563543 command_runner.go:130] > [crio.tracing]
	I1002 12:01:54.745135 2563543 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1002 12:01:54.745144 2563543 command_runner.go:130] > # enable_tracing = false
	I1002 12:01:54.745154 2563543 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1002 12:01:54.745169 2563543 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1002 12:01:54.745180 2563543 command_runner.go:130] > # Number of samples to collect per million spans.
	I1002 12:01:54.745187 2563543 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1002 12:01:54.745194 2563543 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1002 12:01:54.745206 2563543 command_runner.go:130] > [crio.stats]
	I1002 12:01:54.745214 2563543 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1002 12:01:54.745224 2563543 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1002 12:01:54.745242 2563543 command_runner.go:130] > # stats_collection_period = 0
	I1002 12:01:54.746908 2563543 command_runner.go:130] ! time="2023-10-02 12:01:54.735972790Z" level=info msg="Starting CRI-O, version: 1.24.6, git: 4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90(clean)"
	I1002 12:01:54.746966 2563543 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1002 12:01:54.747060 2563543 cni.go:84] Creating CNI manager for ""
	I1002 12:01:54.747073 2563543 cni.go:136] 1 nodes found, recommending kindnet
	I1002 12:01:54.747111 2563543 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1002 12:01:54.747134 2563543 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.28.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-361100 NodeName:multinode-361100 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 12:01:54.747312 2563543 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-361100"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 12:01:54.747397 2563543 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=multinode-361100 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.2 ClusterName:multinode-361100 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
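The kubelet unit above uses a standard systemd drop-in idiom: an empty `ExecStart=` line first clears the base unit's command, and the second `ExecStart=` sets the minikube-specific one. A minimal, side-effect-free sketch of writing such a drop-in (paths taken from the log; the file is written to a temp dir here rather than `/etc/systemd/system/kubelet.service.d/`):

```shell
# Write a kubelet drop-in like the one minikube ships as 10-kubeadm.conf.
# The blank ExecStart= resets whatever the packaged unit defined.
unit_dir="$(mktemp -d)"
cat > "$unit_dir/10-kubeadm.conf" <<'EOF'
[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock
EOF
grep -c '^ExecStart' "$unit_dir/10-kubeadm.conf"
```

On a real node this would be followed by `systemctl daemon-reload` so systemd picks up the override.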
	I1002 12:01:54.747481 2563543 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.2
	I1002 12:01:54.757853 2563543 command_runner.go:130] > kubeadm
	I1002 12:01:54.757872 2563543 command_runner.go:130] > kubectl
	I1002 12:01:54.757877 2563543 command_runner.go:130] > kubelet
	I1002 12:01:54.759181 2563543 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 12:01:54.759272 2563543 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 12:01:54.770695 2563543 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (426 bytes)
	I1002 12:01:54.792772 2563543 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 12:01:54.817307 2563543 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2097 bytes)
	I1002 12:01:54.839769 2563543 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I1002 12:01:54.845174 2563543 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
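The `{ grep -v …; echo …; } > /tmp/h.$$; sudo cp …` pipeline above is an idempotent way to pin a hostname in `/etc/hosts`: drop any stale entry for the name, then append the fresh one. A minimal sketch of the same pattern (hypothetical `pin_host` helper, operating on a temp file instead of `/etc/hosts`):

```shell
# Idempotently pin an IP/hostname pair in a hosts-style file.
hosts_file="$(mktemp)"
printf '127.0.0.1\tlocalhost\n10.0.0.9\tcontrol-plane.minikube.internal\n' > "$hosts_file"

pin_host() {
  file=$1 ip=$2 name=$3
  # Remove any line ending in the hostname, then append the new mapping.
  { grep -v "${name}\$" "$file"; printf '%s\t%s\n' "$ip" "$name"; } > "${file}.new"
  mv "${file}.new" "$file"
}

pin_host "$hosts_file" 192.168.58.2 control-plane.minikube.internal
grep 'control-plane' "$hosts_file"
```

Running `pin_host` twice with the same arguments leaves exactly one entry, which is why minikube can safely re-run this on every start.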
	I1002 12:01:54.859888 2563543 certs.go:56] Setting up /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/multinode-361100 for IP: 192.168.58.2
	I1002 12:01:54.859918 2563543 certs.go:190] acquiring lock for shared ca certs: {Name:mk2e28f0a4c3849593f708b97426b4e4332dc9e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 12:01:54.860150 2563543 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17340-2494243/.minikube/ca.key
	I1002 12:01:54.860202 2563543 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17340-2494243/.minikube/proxy-client-ca.key
	I1002 12:01:54.860252 2563543 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/multinode-361100/client.key
	I1002 12:01:54.860269 2563543 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/multinode-361100/client.crt with IP's: []
	I1002 12:01:55.093483 2563543 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/multinode-361100/client.crt ...
	I1002 12:01:55.093519 2563543 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/multinode-361100/client.crt: {Name:mk74feb1b5f0dc8a318c41eeb95de4ca911703fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 12:01:55.093768 2563543 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/multinode-361100/client.key ...
	I1002 12:01:55.093781 2563543 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/multinode-361100/client.key: {Name:mk21c1af63a3efc73d18924c2ecbbbcd125f0503 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 12:01:55.093902 2563543 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/multinode-361100/apiserver.key.cee25041
	I1002 12:01:55.093916 2563543 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/multinode-361100/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1002 12:01:55.261369 2563543 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/multinode-361100/apiserver.crt.cee25041 ...
	I1002 12:01:55.261403 2563543 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/multinode-361100/apiserver.crt.cee25041: {Name:mka33c5f124f1fe540256523be4edf3fce493fee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 12:01:55.261608 2563543 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/multinode-361100/apiserver.key.cee25041 ...
	I1002 12:01:55.261625 2563543 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/multinode-361100/apiserver.key.cee25041: {Name:mkd3c54cd429f92bea0d307e91085af5a857c06c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 12:01:55.261729 2563543 certs.go:337] copying /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/multinode-361100/apiserver.crt.cee25041 -> /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/multinode-361100/apiserver.crt
	I1002 12:01:55.261825 2563543 certs.go:341] copying /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/multinode-361100/apiserver.key.cee25041 -> /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/multinode-361100/apiserver.key
	I1002 12:01:55.261887 2563543 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/multinode-361100/proxy-client.key
	I1002 12:01:55.261905 2563543 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/multinode-361100/proxy-client.crt with IP's: []
	I1002 12:01:56.013512 2563543 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/multinode-361100/proxy-client.crt ...
	I1002 12:01:56.013545 2563543 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/multinode-361100/proxy-client.crt: {Name:mk2ce9398755c585646c265645d44a0bbab58b37 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 12:01:56.013749 2563543 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/multinode-361100/proxy-client.key ...
	I1002 12:01:56.013764 2563543 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/multinode-361100/proxy-client.key: {Name:mk7b46556f9b893327360dac3d85332a512a8bdf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 12:01:56.013853 2563543 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/multinode-361100/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1002 12:01:56.013876 2563543 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/multinode-361100/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1002 12:01:56.013888 2563543 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/multinode-361100/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1002 12:01:56.013902 2563543 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/multinode-361100/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1002 12:01:56.013914 2563543 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2494243/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1002 12:01:56.013932 2563543 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2494243/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1002 12:01:56.013946 2563543 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2494243/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1002 12:01:56.013960 2563543 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2494243/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1002 12:01:56.014022 2563543 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-2494243/.minikube/certs/home/jenkins/minikube-integration/17340-2494243/.minikube/certs/2499598.pem (1338 bytes)
	W1002 12:01:56.014064 2563543 certs.go:433] ignoring /home/jenkins/minikube-integration/17340-2494243/.minikube/certs/home/jenkins/minikube-integration/17340-2494243/.minikube/certs/2499598_empty.pem, impossibly tiny 0 bytes
	I1002 12:01:56.014079 2563543 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-2494243/.minikube/certs/home/jenkins/minikube-integration/17340-2494243/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 12:01:56.014105 2563543 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-2494243/.minikube/certs/home/jenkins/minikube-integration/17340-2494243/.minikube/certs/ca.pem (1082 bytes)
	I1002 12:01:56.014136 2563543 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-2494243/.minikube/certs/home/jenkins/minikube-integration/17340-2494243/.minikube/certs/cert.pem (1123 bytes)
	I1002 12:01:56.014196 2563543 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-2494243/.minikube/certs/home/jenkins/minikube-integration/17340-2494243/.minikube/certs/key.pem (1675 bytes)
	I1002 12:01:56.014249 2563543 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-2494243/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17340-2494243/.minikube/files/etc/ssl/certs/24995982.pem (1708 bytes)
	I1002 12:01:56.014275 2563543 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2494243/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1002 12:01:56.014293 2563543 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2494243/.minikube/certs/2499598.pem -> /usr/share/ca-certificates/2499598.pem
	I1002 12:01:56.014310 2563543 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2494243/.minikube/files/etc/ssl/certs/24995982.pem -> /usr/share/ca-certificates/24995982.pem
	I1002 12:01:56.014939 2563543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/multinode-361100/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1002 12:01:56.046533 2563543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/multinode-361100/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1002 12:01:56.077555 2563543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/multinode-361100/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 12:01:56.107780 2563543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/multinode-361100/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 12:01:56.136796 2563543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2494243/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 12:01:56.166484 2563543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2494243/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1002 12:01:56.195938 2563543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2494243/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 12:01:56.225579 2563543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2494243/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1002 12:01:56.255513 2563543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2494243/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 12:01:56.285473 2563543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2494243/.minikube/certs/2499598.pem --> /usr/share/ca-certificates/2499598.pem (1338 bytes)
	I1002 12:01:56.314288 2563543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2494243/.minikube/files/etc/ssl/certs/24995982.pem --> /usr/share/ca-certificates/24995982.pem (1708 bytes)
	I1002 12:01:56.343818 2563543 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 12:01:56.366119 2563543 ssh_runner.go:195] Run: openssl version
	I1002 12:01:56.373127 2563543 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I1002 12:01:56.373532 2563543 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 12:01:56.386027 2563543 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 12:01:56.391187 2563543 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct  2 11:39 /usr/share/ca-certificates/minikubeCA.pem
	I1002 12:01:56.391233 2563543 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct  2 11:39 /usr/share/ca-certificates/minikubeCA.pem
	I1002 12:01:56.391314 2563543 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 12:01:56.400062 2563543 command_runner.go:130] > b5213941
	I1002 12:01:56.400572 2563543 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 12:01:56.413147 2563543 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2499598.pem && ln -fs /usr/share/ca-certificates/2499598.pem /etc/ssl/certs/2499598.pem"
	I1002 12:01:56.425332 2563543 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2499598.pem
	I1002 12:01:56.430077 2563543 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct  2 11:46 /usr/share/ca-certificates/2499598.pem
	I1002 12:01:56.430367 2563543 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct  2 11:46 /usr/share/ca-certificates/2499598.pem
	I1002 12:01:56.430455 2563543 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2499598.pem
	I1002 12:01:56.439207 2563543 command_runner.go:130] > 51391683
	I1002 12:01:56.439652 2563543 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2499598.pem /etc/ssl/certs/51391683.0"
	I1002 12:01:56.452427 2563543 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/24995982.pem && ln -fs /usr/share/ca-certificates/24995982.pem /etc/ssl/certs/24995982.pem"
	I1002 12:01:56.465313 2563543 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/24995982.pem
	I1002 12:01:56.470422 2563543 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct  2 11:46 /usr/share/ca-certificates/24995982.pem
	I1002 12:01:56.470498 2563543 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct  2 11:46 /usr/share/ca-certificates/24995982.pem
	I1002 12:01:56.470566 2563543 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/24995982.pem
	I1002 12:01:56.478843 2563543 command_runner.go:130] > 3ec20f2e
	I1002 12:01:56.479297 2563543 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/24995982.pem /etc/ssl/certs/3ec20f2e.0"
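The sequence above is OpenSSL's hashed cert-directory convention: the trust directory is scanned by subject-name hash, so each CA must be reachable as `<hash>.0`. A sketch of the same steps with a throwaway self-signed CA in a temp dir (the `/CN=demoCA` subject is made up for the example):

```shell
# Install a CA into an OpenSSL-style cert dir: hash the subject,
# then symlink <hash>.0 to the PEM file, as the log does.
certs_dir="$(mktemp -d)"
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demoCA" \
  -keyout "$certs_dir/ca.key" -out "$certs_dir/ca.pem" -days 1 2>/dev/null
hash=$(openssl x509 -hash -noout -in "$certs_dir/ca.pem")
ln -fs "$certs_dir/ca.pem" "$certs_dir/$hash.0"
ls -l "$certs_dir/$hash.0"
```

The `test -L … || ln -fs …` guard in the log just makes the symlink creation idempotent across restarts.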
	I1002 12:01:56.491886 2563543 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1002 12:01:56.496579 2563543 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1002 12:01:56.496630 2563543 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1002 12:01:56.496671 2563543 kubeadm.go:404] StartCluster: {Name:multinode-361100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:multinode-361100 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 12:01:56.496759 2563543 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 12:01:56.496826 2563543 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 12:01:56.539203 2563543 cri.go:89] found id: ""
	I1002 12:01:56.539277 2563543 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 12:01:56.550477 2563543 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I1002 12:01:56.550505 2563543 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I1002 12:01:56.550514 2563543 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
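The failed `ls` above is how minikube detects a first start: listing the expected state files exits with status 2 when any of them is missing, and that failure is interpreted as "no prior cluster state". A side-effect-free sketch of the same check (the temp dir stands in for `/var/lib/kubelet`):

```shell
# First-start detection: a failed `ls` over the expected state files
# means there is nothing to clean up from a previous cluster.
state_dir="$(mktemp -d)"
if ls "$state_dir/kubeadm-flags.env" "$state_dir/config.yaml" >/dev/null 2>&1; then
  echo "existing cluster state found"
else
  echo "no cluster state, assuming first start"
fi
```

Once the files exist, the same check succeeds and the cleanup path runs instead.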
	I1002 12:01:56.550590 2563543 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 12:01:56.562552 2563543 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1002 12:01:56.562628 2563543 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 12:01:56.573605 2563543 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I1002 12:01:56.573629 2563543 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I1002 12:01:56.573638 2563543 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I1002 12:01:56.573647 2563543 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 12:01:56.573705 2563543 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 12:01:56.573750 2563543 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 12:01:56.630224 2563543 kubeadm.go:322] [init] Using Kubernetes version: v1.28.2
	I1002 12:01:56.630325 2563543 command_runner.go:130] > [init] Using Kubernetes version: v1.28.2
	I1002 12:01:56.630706 2563543 kubeadm.go:322] [preflight] Running pre-flight checks
	I1002 12:01:56.630762 2563543 command_runner.go:130] > [preflight] Running pre-flight checks
	I1002 12:01:56.677157 2563543 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I1002 12:01:56.677229 2563543 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I1002 12:01:56.677314 2563543 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1045-aws
	I1002 12:01:56.677339 2563543 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1045-aws
	I1002 12:01:56.677385 2563543 kubeadm.go:322] OS: Linux
	I1002 12:01:56.677417 2563543 command_runner.go:130] > OS: Linux
	I1002 12:01:56.677478 2563543 kubeadm.go:322] CGROUPS_CPU: enabled
	I1002 12:01:56.677500 2563543 command_runner.go:130] > CGROUPS_CPU: enabled
	I1002 12:01:56.677567 2563543 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I1002 12:01:56.677592 2563543 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I1002 12:01:56.677652 2563543 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I1002 12:01:56.677694 2563543 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I1002 12:01:56.677767 2563543 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I1002 12:01:56.677790 2563543 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I1002 12:01:56.677867 2563543 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I1002 12:01:56.677890 2563543 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I1002 12:01:56.677966 2563543 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I1002 12:01:56.677996 2563543 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I1002 12:01:56.678068 2563543 kubeadm.go:322] CGROUPS_PIDS: enabled
	I1002 12:01:56.678090 2563543 command_runner.go:130] > CGROUPS_PIDS: enabled
	I1002 12:01:56.678181 2563543 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I1002 12:01:56.678204 2563543 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I1002 12:01:56.678285 2563543 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I1002 12:01:56.678317 2563543 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I1002 12:01:56.760253 2563543 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 12:01:56.760323 2563543 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 12:01:56.760440 2563543 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 12:01:56.760463 2563543 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 12:01:56.760586 2563543 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1002 12:01:56.760612 2563543 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1002 12:01:57.033334 2563543 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 12:01:57.035555 2563543 out.go:204]   - Generating certificates and keys ...
	I1002 12:01:57.033410 2563543 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 12:01:57.035829 2563543 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1002 12:01:57.035868 2563543 command_runner.go:130] > [certs] Using existing ca certificate authority
	I1002 12:01:57.035976 2563543 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1002 12:01:57.036010 2563543 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I1002 12:01:57.744449 2563543 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1002 12:01:57.744472 2563543 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I1002 12:01:58.933073 2563543 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1002 12:01:58.933104 2563543 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I1002 12:01:59.414372 2563543 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1002 12:01:59.414396 2563543 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I1002 12:01:59.705070 2563543 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1002 12:01:59.705098 2563543 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I1002 12:01:59.996584 2563543 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1002 12:01:59.996612 2563543 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I1002 12:01:59.996963 2563543 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-361100] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1002 12:01:59.996982 2563543 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-361100] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1002 12:02:00.591514 2563543 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1002 12:02:00.591545 2563543 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I1002 12:02:00.591960 2563543 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-361100] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1002 12:02:00.591981 2563543 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-361100] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1002 12:02:00.707053 2563543 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1002 12:02:00.707078 2563543 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I1002 12:02:01.095257 2563543 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1002 12:02:01.095288 2563543 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I1002 12:02:01.271573 2563543 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1002 12:02:01.271602 2563543 command_runner.go:130] > [certs] Generating "sa" key and public key
	I1002 12:02:01.271863 2563543 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 12:02:01.271879 2563543 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 12:02:01.814488 2563543 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 12:02:01.814518 2563543 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 12:02:02.052217 2563543 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 12:02:02.052251 2563543 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 12:02:02.332388 2563543 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 12:02:02.332413 2563543 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 12:02:03.187467 2563543 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 12:02:03.187495 2563543 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 12:02:03.188350 2563543 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 12:02:03.188376 2563543 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 12:02:03.193310 2563543 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 12:02:03.195825 2563543 out.go:204]   - Booting up control plane ...
	I1002 12:02:03.193376 2563543 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 12:02:03.195976 2563543 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 12:02:03.195996 2563543 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 12:02:03.196076 2563543 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 12:02:03.196088 2563543 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 12:02:03.197375 2563543 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 12:02:03.197397 2563543 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 12:02:03.208969 2563543 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 12:02:03.209001 2563543 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 12:02:03.210062 2563543 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 12:02:03.210085 2563543 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 12:02:03.210156 2563543 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1002 12:02:03.210169 2563543 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1002 12:02:03.320137 2563543 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1002 12:02:03.320168 2563543 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1002 12:02:11.323095 2563543 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.002362 seconds
	I1002 12:02:11.323119 2563543 command_runner.go:130] > [apiclient] All control plane components are healthy after 8.002362 seconds
	I1002 12:02:11.323220 2563543 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1002 12:02:11.323226 2563543 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1002 12:02:11.337687 2563543 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1002 12:02:11.337719 2563543 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1002 12:02:11.864775 2563543 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1002 12:02:11.864801 2563543 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I1002 12:02:11.865059 2563543 kubeadm.go:322] [mark-control-plane] Marking the node multinode-361100 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1002 12:02:11.865069 2563543 command_runner.go:130] > [mark-control-plane] Marking the node multinode-361100 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1002 12:02:12.376182 2563543 kubeadm.go:322] [bootstrap-token] Using token: xi7929.itt5r9ugk5j1xm91
	I1002 12:02:12.377976 2563543 out.go:204]   - Configuring RBAC rules ...
	I1002 12:02:12.376216 2563543 command_runner.go:130] > [bootstrap-token] Using token: xi7929.itt5r9ugk5j1xm91
	I1002 12:02:12.378095 2563543 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1002 12:02:12.378106 2563543 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1002 12:02:12.384567 2563543 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1002 12:02:12.384590 2563543 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1002 12:02:12.393212 2563543 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1002 12:02:12.393237 2563543 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1002 12:02:12.397358 2563543 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1002 12:02:12.397389 2563543 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1002 12:02:12.403188 2563543 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1002 12:02:12.403212 2563543 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1002 12:02:12.407330 2563543 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1002 12:02:12.407353 2563543 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1002 12:02:12.421326 2563543 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1002 12:02:12.421351 2563543 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1002 12:02:12.701870 2563543 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1002 12:02:12.701896 2563543 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I1002 12:02:12.827482 2563543 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1002 12:02:12.827507 2563543 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I1002 12:02:12.827514 2563543 kubeadm.go:322] 
	I1002 12:02:12.827570 2563543 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1002 12:02:12.827581 2563543 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I1002 12:02:12.827587 2563543 kubeadm.go:322] 
	I1002 12:02:12.827659 2563543 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1002 12:02:12.827668 2563543 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I1002 12:02:12.827673 2563543 kubeadm.go:322] 
	I1002 12:02:12.827697 2563543 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1002 12:02:12.827707 2563543 command_runner.go:130] >   mkdir -p $HOME/.kube
	I1002 12:02:12.827762 2563543 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1002 12:02:12.827771 2563543 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1002 12:02:12.827818 2563543 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1002 12:02:12.827826 2563543 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1002 12:02:12.827831 2563543 kubeadm.go:322] 
	I1002 12:02:12.827882 2563543 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1002 12:02:12.827890 2563543 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I1002 12:02:12.827894 2563543 kubeadm.go:322] 
	I1002 12:02:12.827939 2563543 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1002 12:02:12.827945 2563543 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1002 12:02:12.827949 2563543 kubeadm.go:322] 
	I1002 12:02:12.827998 2563543 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1002 12:02:12.828006 2563543 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I1002 12:02:12.828075 2563543 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1002 12:02:12.828092 2563543 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1002 12:02:12.828158 2563543 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1002 12:02:12.828169 2563543 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1002 12:02:12.828176 2563543 kubeadm.go:322] 
	I1002 12:02:12.828261 2563543 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1002 12:02:12.828272 2563543 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I1002 12:02:12.828344 2563543 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1002 12:02:12.828352 2563543 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I1002 12:02:12.828356 2563543 kubeadm.go:322] 
	I1002 12:02:12.828435 2563543 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token xi7929.itt5r9ugk5j1xm91 \
	I1002 12:02:12.828441 2563543 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token xi7929.itt5r9ugk5j1xm91 \
	I1002 12:02:12.828579 2563543 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:bafa40ad46197010727e96472103cc853e44f24d916d26f9ef93bdc8a951c012 \
	I1002 12:02:12.828590 2563543 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:bafa40ad46197010727e96472103cc853e44f24d916d26f9ef93bdc8a951c012 \
	I1002 12:02:12.828610 2563543 kubeadm.go:322] 	--control-plane 
	I1002 12:02:12.828617 2563543 command_runner.go:130] > 	--control-plane 
	I1002 12:02:12.828622 2563543 kubeadm.go:322] 
	I1002 12:02:12.828701 2563543 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1002 12:02:12.828709 2563543 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I1002 12:02:12.828714 2563543 kubeadm.go:322] 
	I1002 12:02:12.828797 2563543 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token xi7929.itt5r9ugk5j1xm91 \
	I1002 12:02:12.828806 2563543 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token xi7929.itt5r9ugk5j1xm91 \
	I1002 12:02:12.828902 2563543 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:bafa40ad46197010727e96472103cc853e44f24d916d26f9ef93bdc8a951c012 
	I1002 12:02:12.828911 2563543 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:bafa40ad46197010727e96472103cc853e44f24d916d26f9ef93bdc8a951c012 
	I1002 12:02:12.833328 2563543 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1045-aws\n", err: exit status 1
	I1002 12:02:12.833354 2563543 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1045-aws\n", err: exit status 1
	I1002 12:02:12.833499 2563543 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 12:02:12.833512 2563543 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 12:02:12.833536 2563543 cni.go:84] Creating CNI manager for ""
	I1002 12:02:12.833545 2563543 cni.go:136] 1 nodes found, recommending kindnet
	I1002 12:02:12.835929 2563543 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1002 12:02:12.837768 2563543 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1002 12:02:12.854793 2563543 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1002 12:02:12.854817 2563543 command_runner.go:130] >   Size: 3841245   	Blocks: 7504       IO Block: 4096   regular file
	I1002 12:02:12.854825 2563543 command_runner.go:130] > Device: 36h/54d	Inode: 2872313     Links: 1
	I1002 12:02:12.854832 2563543 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1002 12:02:12.854840 2563543 command_runner.go:130] > Access: 2023-05-09 19:54:42.000000000 +0000
	I1002 12:02:12.854846 2563543 command_runner.go:130] > Modify: 2023-05-09 19:54:42.000000000 +0000
	I1002 12:02:12.854852 2563543 command_runner.go:130] > Change: 2023-10-02 07:16:56.284353094 +0000
	I1002 12:02:12.854858 2563543 command_runner.go:130] >  Birth: 2023-10-02 07:16:56.244353026 +0000
	I1002 12:02:12.855226 2563543 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.2/kubectl ...
	I1002 12:02:12.855245 2563543 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1002 12:02:12.890003 2563543 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1002 12:02:13.768292 2563543 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I1002 12:02:13.775161 2563543 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I1002 12:02:13.786537 2563543 command_runner.go:130] > serviceaccount/kindnet created
	I1002 12:02:13.798867 2563543 command_runner.go:130] > daemonset.apps/kindnet created
	I1002 12:02:13.805167 2563543 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1002 12:02:13.805305 2563543 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:02:13.805389 2563543 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=45957ed538272972541ab48cdf2c4b323d7f5c18 minikube.k8s.io/name=multinode-361100 minikube.k8s.io/updated_at=2023_10_02T12_02_13_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:02:13.825172 2563543 command_runner.go:130] > -16
	I1002 12:02:13.826245 2563543 ops.go:34] apiserver oom_adj: -16
	I1002 12:02:13.972056 2563543 command_runner.go:130] > node/multinode-361100 labeled
	I1002 12:02:13.975709 2563543 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I1002 12:02:13.975814 2563543 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:02:14.109709 2563543 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1002 12:02:14.109803 2563543 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:02:14.209881 2563543 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1002 12:02:14.710704 2563543 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:02:14.807224 2563543 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1002 12:02:15.210937 2563543 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:02:15.307225 2563543 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1002 12:02:15.710869 2563543 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:02:15.803914 2563543 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1002 12:02:16.210310 2563543 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:02:16.303739 2563543 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1002 12:02:16.710167 2563543 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:02:16.803153 2563543 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1002 12:02:17.210779 2563543 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:02:17.319645 2563543 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1002 12:02:17.710887 2563543 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:02:17.808048 2563543 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1002 12:02:18.210653 2563543 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:02:18.307675 2563543 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1002 12:02:18.710148 2563543 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:02:18.807276 2563543 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1002 12:02:19.210994 2563543 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:02:19.308328 2563543 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1002 12:02:19.710652 2563543 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:02:19.798400 2563543 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1002 12:02:20.210409 2563543 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:02:20.309134 2563543 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1002 12:02:20.710812 2563543 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:02:20.807364 2563543 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1002 12:02:21.210983 2563543 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:02:21.309831 2563543 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1002 12:02:21.710332 2563543 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:02:21.809411 2563543 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1002 12:02:22.210163 2563543 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:02:22.312277 2563543 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1002 12:02:22.710875 2563543 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:02:22.805104 2563543 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1002 12:02:23.210781 2563543 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:02:23.313158 2563543 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1002 12:02:23.710815 2563543 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:02:23.803079 2563543 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1002 12:02:24.210608 2563543 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:02:24.309147 2563543 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1002 12:02:24.710676 2563543 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:02:24.811704 2563543 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1002 12:02:25.210142 2563543 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:02:25.334871 2563543 command_runner.go:130] > NAME      SECRETS   AGE
	I1002 12:02:25.334893 2563543 command_runner.go:130] > default   0         1s
	I1002 12:02:25.338528 2563543 kubeadm.go:1081] duration metric: took 11.533267146s to wait for elevateKubeSystemPrivileges.
	I1002 12:02:25.338559 2563543 kubeadm.go:406] StartCluster complete in 28.841892327s
	I1002 12:02:25.338578 2563543 settings.go:142] acquiring lock: {Name:mkcc97fc5770241202468070273c0755324bf4b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 12:02:25.338641 2563543 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17340-2494243/kubeconfig
	I1002 12:02:25.339358 2563543 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17340-2494243/kubeconfig: {Name:mkf500c5450045c9557e34c3a61a2f3f38c10ea4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 12:02:25.339880 2563543 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17340-2494243/kubeconfig
	I1002 12:02:25.340177 2563543 kapi.go:59] client config for multinode-361100: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/multinode-361100/client.crt", KeyFile:"/home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/multinode-361100/client.key", CAFile:"/home/jenkins/minikube-integration/17340-2494243/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x169df20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 12:02:25.340678 2563543 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1002 12:02:25.341008 2563543 config.go:182] Loaded profile config "multinode-361100": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1002 12:02:25.341117 2563543 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1002 12:02:25.341184 2563543 addons.go:69] Setting storage-provisioner=true in profile "multinode-361100"
	I1002 12:02:25.341198 2563543 addons.go:231] Setting addon storage-provisioner=true in "multinode-361100"
	I1002 12:02:25.341239 2563543 host.go:66] Checking if "multinode-361100" exists ...
	I1002 12:02:25.341698 2563543 cli_runner.go:164] Run: docker container inspect multinode-361100 --format={{.State.Status}}
	I1002 12:02:25.342632 2563543 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1002 12:02:25.342647 2563543 round_trippers.go:469] Request Headers:
	I1002 12:02:25.342656 2563543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 12:02:25.342663 2563543 round_trippers.go:473]     Accept: application/json, */*
	I1002 12:02:25.342883 2563543 cert_rotation.go:137] Starting client certificate rotation controller
	I1002 12:02:25.343312 2563543 addons.go:69] Setting default-storageclass=true in profile "multinode-361100"
	I1002 12:02:25.343336 2563543 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-361100"
	I1002 12:02:25.343640 2563543 cli_runner.go:164] Run: docker container inspect multinode-361100 --format={{.State.Status}}
	I1002 12:02:25.385477 2563543 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 12:02:25.387476 2563543 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 12:02:25.387496 2563543 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 12:02:25.387566 2563543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-361100
	I1002 12:02:25.407584 2563543 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17340-2494243/kubeconfig
	I1002 12:02:25.407846 2563543 kapi.go:59] client config for multinode-361100: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/multinode-361100/client.crt", KeyFile:"/home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/multinode-361100/client.key", CAFile:"/home/jenkins/minikube-integration/17340-2494243/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x169df20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 12:02:25.408122 2563543 addons.go:231] Setting addon default-storageclass=true in "multinode-361100"
	I1002 12:02:25.408152 2563543 host.go:66] Checking if "multinode-361100" exists ...
	I1002 12:02:25.408651 2563543 cli_runner.go:164] Run: docker container inspect multinode-361100 --format={{.State.Status}}
	I1002 12:02:25.429798 2563543 round_trippers.go:574] Response Status: 200 OK in 87 milliseconds
	I1002 12:02:25.429824 2563543 round_trippers.go:577] Response Headers:
	I1002 12:02:25.429835 2563543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 12:02:25.429842 2563543 round_trippers.go:580]     Content-Type: application/json
	I1002 12:02:25.429848 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f46ce94e-ab39-47cd-9413-72b4b2d8d98a
	I1002 12:02:25.429854 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f0dd078-9de5-43de-a432-b38b9d4a2a6d
	I1002 12:02:25.429861 2563543 round_trippers.go:580]     Content-Length: 291
	I1002 12:02:25.429867 2563543 round_trippers.go:580]     Date: Mon, 02 Oct 2023 12:02:25 GMT
	I1002 12:02:25.429873 2563543 round_trippers.go:580]     Audit-Id: 1837a6f2-4faf-48a4-bcd7-cfcf6e1a60e0
	I1002 12:02:25.429898 2563543 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"cf3eab07-a74c-49e3-9e4d-6831eea2cf38","resourceVersion":"329","creationTimestamp":"2023-10-02T12:02:12Z"},"spec":{"replicas":2},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1002 12:02:25.430331 2563543 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"cf3eab07-a74c-49e3-9e4d-6831eea2cf38","resourceVersion":"329","creationTimestamp":"2023-10-02T12:02:12Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1002 12:02:25.430388 2563543 round_trippers.go:463] PUT https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1002 12:02:25.430398 2563543 round_trippers.go:469] Request Headers:
	I1002 12:02:25.430406 2563543 round_trippers.go:473]     Accept: application/json, */*
	I1002 12:02:25.430413 2563543 round_trippers.go:473]     Content-Type: application/json
	I1002 12:02:25.430419 2563543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 12:02:25.434829 2563543 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35947 SSHKeyPath:/home/jenkins/minikube-integration/17340-2494243/.minikube/machines/multinode-361100/id_rsa Username:docker}
	I1002 12:02:25.464807 2563543 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 12:02:25.464829 2563543 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 12:02:25.464895 2563543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-361100
	I1002 12:02:25.493813 2563543 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35947 SSHKeyPath:/home/jenkins/minikube-integration/17340-2494243/.minikube/machines/multinode-361100/id_rsa Username:docker}
	I1002 12:02:25.529109 2563543 round_trippers.go:574] Response Status: 200 OK in 98 milliseconds
	I1002 12:02:25.529131 2563543 round_trippers.go:577] Response Headers:
	I1002 12:02:25.529140 2563543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 12:02:25.529146 2563543 round_trippers.go:580]     Content-Type: application/json
	I1002 12:02:25.529153 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f46ce94e-ab39-47cd-9413-72b4b2d8d98a
	I1002 12:02:25.529159 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f0dd078-9de5-43de-a432-b38b9d4a2a6d
	I1002 12:02:25.529166 2563543 round_trippers.go:580]     Content-Length: 291
	I1002 12:02:25.529172 2563543 round_trippers.go:580]     Date: Mon, 02 Oct 2023 12:02:25 GMT
	I1002 12:02:25.529178 2563543 round_trippers.go:580]     Audit-Id: ff99f3ec-cf62-4411-b140-4c0600cbdaf6
	I1002 12:02:25.531847 2563543 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"cf3eab07-a74c-49e3-9e4d-6831eea2cf38","resourceVersion":"350","creationTimestamp":"2023-10-02T12:02:12Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1002 12:02:25.532019 2563543 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1002 12:02:25.532027 2563543 round_trippers.go:469] Request Headers:
	I1002 12:02:25.532037 2563543 round_trippers.go:473]     Accept: application/json, */*
	I1002 12:02:25.532044 2563543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 12:02:25.570816 2563543 round_trippers.go:574] Response Status: 200 OK in 38 milliseconds
	I1002 12:02:25.570889 2563543 round_trippers.go:577] Response Headers:
	I1002 12:02:25.570913 2563543 round_trippers.go:580]     Audit-Id: 757ee06d-3b5a-4bdc-95ce-058b280532e4
	I1002 12:02:25.570937 2563543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 12:02:25.570973 2563543 round_trippers.go:580]     Content-Type: application/json
	I1002 12:02:25.570999 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f46ce94e-ab39-47cd-9413-72b4b2d8d98a
	I1002 12:02:25.571022 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f0dd078-9de5-43de-a432-b38b9d4a2a6d
	I1002 12:02:25.571059 2563543 round_trippers.go:580]     Content-Length: 291
	I1002 12:02:25.571084 2563543 round_trippers.go:580]     Date: Mon, 02 Oct 2023 12:02:25 GMT
	I1002 12:02:25.571742 2563543 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"cf3eab07-a74c-49e3-9e4d-6831eea2cf38","resourceVersion":"350","creationTimestamp":"2023-10-02T12:02:12Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1002 12:02:25.571983 2563543 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-361100" context rescaled to 1 replicas
	I1002 12:02:25.572033 2563543 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 12:02:25.575499 2563543 out.go:177] * Verifying Kubernetes components...
	I1002 12:02:25.577716 2563543 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 12:02:25.601808 2563543 command_runner.go:130] > apiVersion: v1
	I1002 12:02:25.601884 2563543 command_runner.go:130] > data:
	I1002 12:02:25.601914 2563543 command_runner.go:130] >   Corefile: |
	I1002 12:02:25.601931 2563543 command_runner.go:130] >     .:53 {
	I1002 12:02:25.601966 2563543 command_runner.go:130] >         errors
	I1002 12:02:25.601991 2563543 command_runner.go:130] >         health {
	I1002 12:02:25.602012 2563543 command_runner.go:130] >            lameduck 5s
	I1002 12:02:25.602048 2563543 command_runner.go:130] >         }
	I1002 12:02:25.602070 2563543 command_runner.go:130] >         ready
	I1002 12:02:25.602093 2563543 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I1002 12:02:25.602134 2563543 command_runner.go:130] >            pods insecure
	I1002 12:02:25.602159 2563543 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I1002 12:02:25.602180 2563543 command_runner.go:130] >            ttl 30
	I1002 12:02:25.602212 2563543 command_runner.go:130] >         }
	I1002 12:02:25.602236 2563543 command_runner.go:130] >         prometheus :9153
	I1002 12:02:25.602257 2563543 command_runner.go:130] >         forward . /etc/resolv.conf {
	I1002 12:02:25.602292 2563543 command_runner.go:130] >            max_concurrent 1000
	I1002 12:02:25.602314 2563543 command_runner.go:130] >         }
	I1002 12:02:25.602331 2563543 command_runner.go:130] >         cache 30
	I1002 12:02:25.602350 2563543 command_runner.go:130] >         loop
	I1002 12:02:25.602382 2563543 command_runner.go:130] >         reload
	I1002 12:02:25.602406 2563543 command_runner.go:130] >         loadbalance
	I1002 12:02:25.602425 2563543 command_runner.go:130] >     }
	I1002 12:02:25.602459 2563543 command_runner.go:130] > kind: ConfigMap
	I1002 12:02:25.602480 2563543 command_runner.go:130] > metadata:
	I1002 12:02:25.602500 2563543 command_runner.go:130] >   creationTimestamp: "2023-10-02T12:02:12Z"
	I1002 12:02:25.602534 2563543 command_runner.go:130] >   name: coredns
	I1002 12:02:25.602556 2563543 command_runner.go:130] >   namespace: kube-system
	I1002 12:02:25.602577 2563543 command_runner.go:130] >   resourceVersion: "231"
	I1002 12:02:25.602614 2563543 command_runner.go:130] >   uid: c6a20f2f-7099-4996-abc4-0630946be3af
	I1002 12:02:25.602805 2563543 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1002 12:02:25.634378 2563543 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17340-2494243/kubeconfig
	I1002 12:02:25.634759 2563543 kapi.go:59] client config for multinode-361100: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/multinode-361100/client.crt", KeyFile:"/home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/multinode-361100/client.key", CAFile:"/home/jenkins/minikube-integration/17340-2494243/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x169df20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 12:02:25.635144 2563543 node_ready.go:35] waiting up to 6m0s for node "multinode-361100" to be "Ready" ...
	I1002 12:02:25.635270 2563543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-361100
	I1002 12:02:25.635295 2563543 round_trippers.go:469] Request Headers:
	I1002 12:02:25.635330 2563543 round_trippers.go:473]     Accept: application/json, */*
	I1002 12:02:25.635355 2563543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 12:02:25.655442 2563543 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 12:02:25.689942 2563543 round_trippers.go:574] Response Status: 200 OK in 54 milliseconds
	I1002 12:02:25.690015 2563543 round_trippers.go:577] Response Headers:
	I1002 12:02:25.690037 2563543 round_trippers.go:580]     Audit-Id: 4fbf2b4d-d113-47bd-9efc-d827247950e4
	I1002 12:02:25.690060 2563543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 12:02:25.690095 2563543 round_trippers.go:580]     Content-Type: application/json
	I1002 12:02:25.690126 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f46ce94e-ab39-47cd-9413-72b4b2d8d98a
	I1002 12:02:25.690148 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f0dd078-9de5-43de-a432-b38b9d4a2a6d
	I1002 12:02:25.690183 2563543 round_trippers.go:580]     Date: Mon, 02 Oct 2023 12:02:25 GMT
	I1002 12:02:25.691428 2563543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-361100","uid":"addea54c-14e4-4213-b46d-af4c62d4c4db","resourceVersion":"314","creationTimestamp":"2023-10-02T12:02:09Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-361100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-361100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T12_02_13_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T12:02:09Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1002 12:02:25.692333 2563543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-361100
	I1002 12:02:25.692377 2563543 round_trippers.go:469] Request Headers:
	I1002 12:02:25.692400 2563543 round_trippers.go:473]     Accept: application/json, */*
	I1002 12:02:25.692442 2563543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 12:02:25.711584 2563543 round_trippers.go:574] Response Status: 200 OK in 19 milliseconds
	I1002 12:02:25.711655 2563543 round_trippers.go:577] Response Headers:
	I1002 12:02:25.711678 2563543 round_trippers.go:580]     Date: Mon, 02 Oct 2023 12:02:25 GMT
	I1002 12:02:25.711701 2563543 round_trippers.go:580]     Audit-Id: e08a7933-95c8-48ee-8acc-ab14d4e958c7
	I1002 12:02:25.711736 2563543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 12:02:25.711762 2563543 round_trippers.go:580]     Content-Type: application/json
	I1002 12:02:25.711784 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f46ce94e-ab39-47cd-9413-72b4b2d8d98a
	I1002 12:02:25.711819 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f0dd078-9de5-43de-a432-b38b9d4a2a6d
	I1002 12:02:25.716261 2563543 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 12:02:25.721360 2563543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-361100","uid":"addea54c-14e4-4213-b46d-af4c62d4c4db","resourceVersion":"314","creationTimestamp":"2023-10-02T12:02:09Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-361100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-361100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T12_02_13_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T12:02:09Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1002 12:02:26.222624 2563543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-361100
	I1002 12:02:26.222699 2563543 round_trippers.go:469] Request Headers:
	I1002 12:02:26.222732 2563543 round_trippers.go:473]     Accept: application/json, */*
	I1002 12:02:26.222753 2563543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 12:02:26.287794 2563543 round_trippers.go:574] Response Status: 200 OK in 64 milliseconds
	I1002 12:02:26.287820 2563543 round_trippers.go:577] Response Headers:
	I1002 12:02:26.287831 2563543 round_trippers.go:580]     Audit-Id: 0e34a069-e775-4a57-9ee5-505dc8d9133a
	I1002 12:02:26.287838 2563543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 12:02:26.287844 2563543 round_trippers.go:580]     Content-Type: application/json
	I1002 12:02:26.287851 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f46ce94e-ab39-47cd-9413-72b4b2d8d98a
	I1002 12:02:26.287857 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f0dd078-9de5-43de-a432-b38b9d4a2a6d
	I1002 12:02:26.287868 2563543 round_trippers.go:580]     Date: Mon, 02 Oct 2023 12:02:26 GMT
	I1002 12:02:26.289010 2563543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-361100","uid":"addea54c-14e4-4213-b46d-af4c62d4c4db","resourceVersion":"314","creationTimestamp":"2023-10-02T12:02:09Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-361100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-361100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T12_02_13_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T12:02:09Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1002 12:02:26.390009 2563543 command_runner.go:130] > configmap/coredns replaced
	I1002 12:02:26.395158 2563543 start.go:923] {"host.minikube.internal": 192.168.58.1} host record injected into CoreDNS's ConfigMap
	I1002 12:02:26.604825 2563543 command_runner.go:130] > serviceaccount/storage-provisioner created
	I1002 12:02:26.613892 2563543 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I1002 12:02:26.623678 2563543 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I1002 12:02:26.633224 2563543 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I1002 12:02:26.645204 2563543 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I1002 12:02:26.659981 2563543 command_runner.go:130] > pod/storage-provisioner created
	I1002 12:02:26.667063 2563543 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I1002 12:02:26.667182 2563543 round_trippers.go:463] GET https://192.168.58.2:8443/apis/storage.k8s.io/v1/storageclasses
	I1002 12:02:26.667195 2563543 round_trippers.go:469] Request Headers:
	I1002 12:02:26.667205 2563543 round_trippers.go:473]     Accept: application/json, */*
	I1002 12:02:26.667213 2563543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 12:02:26.667321 2563543 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.011796891s)
	I1002 12:02:26.678120 2563543 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I1002 12:02:26.678149 2563543 round_trippers.go:577] Response Headers:
	I1002 12:02:26.678158 2563543 round_trippers.go:580]     Content-Type: application/json
	I1002 12:02:26.678164 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f46ce94e-ab39-47cd-9413-72b4b2d8d98a
	I1002 12:02:26.678172 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f0dd078-9de5-43de-a432-b38b9d4a2a6d
	I1002 12:02:26.678179 2563543 round_trippers.go:580]     Content-Length: 1273
	I1002 12:02:26.678186 2563543 round_trippers.go:580]     Date: Mon, 02 Oct 2023 12:02:26 GMT
	I1002 12:02:26.678195 2563543 round_trippers.go:580]     Audit-Id: 056cd068-ade0-4620-8afc-60db7f48aee2
	I1002 12:02:26.678202 2563543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 12:02:26.679448 2563543 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"381"},"items":[{"metadata":{"name":"standard","uid":"23a23545-09b2-4324-92ee-f8e4ce642919","resourceVersion":"371","creationTimestamp":"2023-10-02T12:02:26Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-10-02T12:02:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I1002 12:02:26.679849 2563543 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"23a23545-09b2-4324-92ee-f8e4ce642919","resourceVersion":"371","creationTimestamp":"2023-10-02T12:02:26Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-10-02T12:02:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1002 12:02:26.679909 2563543 round_trippers.go:463] PUT https://192.168.58.2:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1002 12:02:26.679921 2563543 round_trippers.go:469] Request Headers:
	I1002 12:02:26.679929 2563543 round_trippers.go:473]     Accept: application/json, */*
	I1002 12:02:26.679940 2563543 round_trippers.go:473]     Content-Type: application/json
	I1002 12:02:26.679947 2563543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 12:02:26.684189 2563543 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1002 12:02:26.684214 2563543 round_trippers.go:577] Response Headers:
	I1002 12:02:26.684223 2563543 round_trippers.go:580]     Audit-Id: 3221822e-47eb-45e3-9c63-e0e3cb806fb6
	I1002 12:02:26.684230 2563543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 12:02:26.684236 2563543 round_trippers.go:580]     Content-Type: application/json
	I1002 12:02:26.684243 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f46ce94e-ab39-47cd-9413-72b4b2d8d98a
	I1002 12:02:26.684249 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f0dd078-9de5-43de-a432-b38b9d4a2a6d
	I1002 12:02:26.684258 2563543 round_trippers.go:580]     Content-Length: 1220
	I1002 12:02:26.684265 2563543 round_trippers.go:580]     Date: Mon, 02 Oct 2023 12:02:26 GMT
	I1002 12:02:26.684432 2563543 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"23a23545-09b2-4324-92ee-f8e4ce642919","resourceVersion":"371","creationTimestamp":"2023-10-02T12:02:26Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-10-02T12:02:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1002 12:02:26.686976 2563543 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1002 12:02:26.689016 2563543 addons.go:502] enable addons completed in 1.347886088s: enabled=[storage-provisioner default-storageclass]
	I1002 12:02:26.722221 2563543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-361100
	I1002 12:02:26.722245 2563543 round_trippers.go:469] Request Headers:
	I1002 12:02:26.722256 2563543 round_trippers.go:473]     Accept: application/json, */*
	I1002 12:02:26.722263 2563543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 12:02:26.725052 2563543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 12:02:26.725127 2563543 round_trippers.go:577] Response Headers:
	I1002 12:02:26.725154 2563543 round_trippers.go:580]     Content-Type: application/json
	I1002 12:02:26.725181 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f46ce94e-ab39-47cd-9413-72b4b2d8d98a
	I1002 12:02:26.725221 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f0dd078-9de5-43de-a432-b38b9d4a2a6d
	I1002 12:02:26.725243 2563543 round_trippers.go:580]     Date: Mon, 02 Oct 2023 12:02:26 GMT
	I1002 12:02:26.725282 2563543 round_trippers.go:580]     Audit-Id: 1e427853-cb2b-43ca-a196-534349d6e302
	I1002 12:02:26.725307 2563543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 12:02:26.729270 2563543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-361100","uid":"addea54c-14e4-4213-b46d-af4c62d4c4db","resourceVersion":"314","creationTimestamp":"2023-10-02T12:02:09Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-361100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-361100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T12_02_13_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T12:02:09Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1002 12:02:27.222053 2563543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-361100
	I1002 12:02:27.222081 2563543 round_trippers.go:469] Request Headers:
	I1002 12:02:27.222090 2563543 round_trippers.go:473]     Accept: application/json, */*
	I1002 12:02:27.222097 2563543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 12:02:27.224893 2563543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 12:02:27.225027 2563543 round_trippers.go:577] Response Headers:
	I1002 12:02:27.225051 2563543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 12:02:27.225065 2563543 round_trippers.go:580]     Content-Type: application/json
	I1002 12:02:27.225071 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f46ce94e-ab39-47cd-9413-72b4b2d8d98a
	I1002 12:02:27.225078 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f0dd078-9de5-43de-a432-b38b9d4a2a6d
	I1002 12:02:27.225086 2563543 round_trippers.go:580]     Date: Mon, 02 Oct 2023 12:02:27 GMT
	I1002 12:02:27.225093 2563543 round_trippers.go:580]     Audit-Id: cfc1f784-b4c6-4429-97f2-23083bfeba5f
	I1002 12:02:27.225193 2563543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-361100","uid":"addea54c-14e4-4213-b46d-af4c62d4c4db","resourceVersion":"314","creationTimestamp":"2023-10-02T12:02:09Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-361100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-361100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T12_02_13_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T12:02:09Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1002 12:02:27.722686 2563543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-361100
	I1002 12:02:27.722713 2563543 round_trippers.go:469] Request Headers:
	I1002 12:02:27.722723 2563543 round_trippers.go:473]     Accept: application/json, */*
	I1002 12:02:27.722730 2563543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 12:02:27.725519 2563543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 12:02:27.725611 2563543 round_trippers.go:577] Response Headers:
	I1002 12:02:27.725624 2563543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 12:02:27.725631 2563543 round_trippers.go:580]     Content-Type: application/json
	I1002 12:02:27.725641 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f46ce94e-ab39-47cd-9413-72b4b2d8d98a
	I1002 12:02:27.725648 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f0dd078-9de5-43de-a432-b38b9d4a2a6d
	I1002 12:02:27.725657 2563543 round_trippers.go:580]     Date: Mon, 02 Oct 2023 12:02:27 GMT
	I1002 12:02:27.725664 2563543 round_trippers.go:580]     Audit-Id: 5878a5a1-b454-412f-bdf2-cfb9e30592d9
	I1002 12:02:27.725789 2563543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-361100","uid":"addea54c-14e4-4213-b46d-af4c62d4c4db","resourceVersion":"314","creationTimestamp":"2023-10-02T12:02:09Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-361100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-361100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T12_02_13_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T12:02:09Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1002 12:02:27.726240 2563543 node_ready.go:58] node "multinode-361100" has status "Ready":"False"
	I1002 12:02:28.222889 2563543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-361100
	I1002 12:02:28.222911 2563543 round_trippers.go:469] Request Headers:
	I1002 12:02:28.222921 2563543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 12:02:28.222929 2563543 round_trippers.go:473]     Accept: application/json, */*
	I1002 12:02:28.225495 2563543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 12:02:28.225560 2563543 round_trippers.go:577] Response Headers:
	I1002 12:02:28.225582 2563543 round_trippers.go:580]     Date: Mon, 02 Oct 2023 12:02:28 GMT
	I1002 12:02:28.225604 2563543 round_trippers.go:580]     Audit-Id: 522725d4-efc9-423f-8fa2-2354ac131c1d
	I1002 12:02:28.225638 2563543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 12:02:28.225666 2563543 round_trippers.go:580]     Content-Type: application/json
	I1002 12:02:28.225688 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f46ce94e-ab39-47cd-9413-72b4b2d8d98a
	I1002 12:02:28.225711 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f0dd078-9de5-43de-a432-b38b9d4a2a6d
	I1002 12:02:28.225870 2563543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-361100","uid":"addea54c-14e4-4213-b46d-af4c62d4c4db","resourceVersion":"314","creationTimestamp":"2023-10-02T12:02:09Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-361100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-361100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T12_02_13_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T12:02:09Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1002 12:02:28.722277 2563543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-361100
	I1002 12:02:28.722308 2563543 round_trippers.go:469] Request Headers:
	I1002 12:02:28.722318 2563543 round_trippers.go:473]     Accept: application/json, */*
	I1002 12:02:28.722325 2563543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 12:02:28.725034 2563543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 12:02:28.725097 2563543 round_trippers.go:577] Response Headers:
	I1002 12:02:28.725119 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f0dd078-9de5-43de-a432-b38b9d4a2a6d
	I1002 12:02:28.725148 2563543 round_trippers.go:580]     Date: Mon, 02 Oct 2023 12:02:28 GMT
	I1002 12:02:28.725170 2563543 round_trippers.go:580]     Audit-Id: 7dd66721-8a26-48d5-b06b-ecee6abac91c
	I1002 12:02:28.725195 2563543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 12:02:28.725216 2563543 round_trippers.go:580]     Content-Type: application/json
	I1002 12:02:28.725247 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f46ce94e-ab39-47cd-9413-72b4b2d8d98a
	I1002 12:02:28.725400 2563543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-361100","uid":"addea54c-14e4-4213-b46d-af4c62d4c4db","resourceVersion":"314","creationTimestamp":"2023-10-02T12:02:09Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-361100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-361100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T12_02_13_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T12:02:09Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1002 12:02:29.222919 2563543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-361100
	I1002 12:02:29.222945 2563543 round_trippers.go:469] Request Headers:
	I1002 12:02:29.222955 2563543 round_trippers.go:473]     Accept: application/json, */*
	I1002 12:02:29.222962 2563543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 12:02:29.225662 2563543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 12:02:29.225698 2563543 round_trippers.go:577] Response Headers:
	I1002 12:02:29.225761 2563543 round_trippers.go:580]     Audit-Id: 733b1f8b-fc30-4251-9de9-e6aadd1d6108
	I1002 12:02:29.225769 2563543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 12:02:29.225775 2563543 round_trippers.go:580]     Content-Type: application/json
	I1002 12:02:29.225781 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f46ce94e-ab39-47cd-9413-72b4b2d8d98a
	I1002 12:02:29.225787 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f0dd078-9de5-43de-a432-b38b9d4a2a6d
	I1002 12:02:29.225795 2563543 round_trippers.go:580]     Date: Mon, 02 Oct 2023 12:02:29 GMT
	I1002 12:02:29.225883 2563543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-361100","uid":"addea54c-14e4-4213-b46d-af4c62d4c4db","resourceVersion":"314","creationTimestamp":"2023-10-02T12:02:09Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-361100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-361100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T12_02_13_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T12:02:09Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1002 12:02:29.722410 2563543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-361100
	I1002 12:02:29.722438 2563543 round_trippers.go:469] Request Headers:
	I1002 12:02:29.722449 2563543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 12:02:29.722456 2563543 round_trippers.go:473]     Accept: application/json, */*
	I1002 12:02:29.725315 2563543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 12:02:29.725338 2563543 round_trippers.go:577] Response Headers:
	I1002 12:02:29.725347 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f46ce94e-ab39-47cd-9413-72b4b2d8d98a
	I1002 12:02:29.725354 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f0dd078-9de5-43de-a432-b38b9d4a2a6d
	I1002 12:02:29.725360 2563543 round_trippers.go:580]     Date: Mon, 02 Oct 2023 12:02:29 GMT
	I1002 12:02:29.725417 2563543 round_trippers.go:580]     Audit-Id: 07d7103e-6c8c-47bc-9bac-253319fc31c1
	I1002 12:02:29.725424 2563543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 12:02:29.725431 2563543 round_trippers.go:580]     Content-Type: application/json
	I1002 12:02:29.725529 2563543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-361100","uid":"addea54c-14e4-4213-b46d-af4c62d4c4db","resourceVersion":"314","creationTimestamp":"2023-10-02T12:02:09Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-361100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-361100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T12_02_13_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T12:02:09Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1002 12:02:30.222112 2563543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-361100
	I1002 12:02:30.222151 2563543 round_trippers.go:469] Request Headers:
	I1002 12:02:30.222163 2563543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 12:02:30.222172 2563543 round_trippers.go:473]     Accept: application/json, */*
	I1002 12:02:30.225382 2563543 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1002 12:02:30.225414 2563543 round_trippers.go:577] Response Headers:
	I1002 12:02:30.225424 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f0dd078-9de5-43de-a432-b38b9d4a2a6d
	I1002 12:02:30.225432 2563543 round_trippers.go:580]     Date: Mon, 02 Oct 2023 12:02:30 GMT
	I1002 12:02:30.225440 2563543 round_trippers.go:580]     Audit-Id: 5a966992-61c0-4e56-8541-915d88efc6ec
	I1002 12:02:30.225446 2563543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 12:02:30.225452 2563543 round_trippers.go:580]     Content-Type: application/json
	I1002 12:02:30.225458 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f46ce94e-ab39-47cd-9413-72b4b2d8d98a
	I1002 12:02:30.225655 2563543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-361100","uid":"addea54c-14e4-4213-b46d-af4c62d4c4db","resourceVersion":"314","creationTimestamp":"2023-10-02T12:02:09Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-361100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-361100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T12_02_13_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T12:02:09Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1002 12:02:30.226125 2563543 node_ready.go:58] node "multinode-361100" has status "Ready":"False"
	I1002 12:02:30.722821 2563543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-361100
	I1002 12:02:30.722851 2563543 round_trippers.go:469] Request Headers:
	I1002 12:02:30.722862 2563543 round_trippers.go:473]     Accept: application/json, */*
	I1002 12:02:30.722869 2563543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 12:02:30.725805 2563543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 12:02:30.725885 2563543 round_trippers.go:577] Response Headers:
	I1002 12:02:30.725909 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f46ce94e-ab39-47cd-9413-72b4b2d8d98a
	I1002 12:02:30.725997 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f0dd078-9de5-43de-a432-b38b9d4a2a6d
	I1002 12:02:30.726012 2563543 round_trippers.go:580]     Date: Mon, 02 Oct 2023 12:02:30 GMT
	I1002 12:02:30.726020 2563543 round_trippers.go:580]     Audit-Id: e6fadccc-4f0f-4866-9486-9214abe10ee1
	I1002 12:02:30.726026 2563543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 12:02:30.726032 2563543 round_trippers.go:580]     Content-Type: application/json
	I1002 12:02:30.726179 2563543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-361100","uid":"addea54c-14e4-4213-b46d-af4c62d4c4db","resourceVersion":"314","creationTimestamp":"2023-10-02T12:02:09Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-361100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-361100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T12_02_13_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T12:02:09Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1002 12:02:31.222727 2563543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-361100
	I1002 12:02:31.222755 2563543 round_trippers.go:469] Request Headers:
	I1002 12:02:31.222765 2563543 round_trippers.go:473]     Accept: application/json, */*
	I1002 12:02:31.222773 2563543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 12:02:31.225386 2563543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 12:02:31.225406 2563543 round_trippers.go:577] Response Headers:
	I1002 12:02:31.225415 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f0dd078-9de5-43de-a432-b38b9d4a2a6d
	I1002 12:02:31.225422 2563543 round_trippers.go:580]     Date: Mon, 02 Oct 2023 12:02:31 GMT
	I1002 12:02:31.225428 2563543 round_trippers.go:580]     Audit-Id: e746052c-15b7-4838-9acc-ccd559d5a17c
	I1002 12:02:31.225435 2563543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 12:02:31.225441 2563543 round_trippers.go:580]     Content-Type: application/json
	I1002 12:02:31.225448 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f46ce94e-ab39-47cd-9413-72b4b2d8d98a
	I1002 12:02:31.225606 2563543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-361100","uid":"addea54c-14e4-4213-b46d-af4c62d4c4db","resourceVersion":"314","creationTimestamp":"2023-10-02T12:02:09Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-361100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-361100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T12_02_13_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T12:02:09Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1002 12:02:31.723031 2563543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-361100
	I1002 12:02:31.723056 2563543 round_trippers.go:469] Request Headers:
	I1002 12:02:31.723066 2563543 round_trippers.go:473]     Accept: application/json, */*
	I1002 12:02:31.723074 2563543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 12:02:31.725713 2563543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 12:02:31.725735 2563543 round_trippers.go:577] Response Headers:
	I1002 12:02:31.725744 2563543 round_trippers.go:580]     Content-Type: application/json
	I1002 12:02:31.725751 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f46ce94e-ab39-47cd-9413-72b4b2d8d98a
	I1002 12:02:31.725758 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f0dd078-9de5-43de-a432-b38b9d4a2a6d
	I1002 12:02:31.725764 2563543 round_trippers.go:580]     Date: Mon, 02 Oct 2023 12:02:31 GMT
	I1002 12:02:31.725770 2563543 round_trippers.go:580]     Audit-Id: 746a45fa-0ae4-4396-94e4-33b1a9aba848
	I1002 12:02:31.725777 2563543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 12:02:31.725928 2563543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-361100","uid":"addea54c-14e4-4213-b46d-af4c62d4c4db","resourceVersion":"314","creationTimestamp":"2023-10-02T12:02:09Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-361100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-361100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T12_02_13_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T12:02:09Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1002 12:02:32.222371 2563543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-361100
	I1002 12:02:32.222398 2563543 round_trippers.go:469] Request Headers:
	I1002 12:02:32.222409 2563543 round_trippers.go:473]     Accept: application/json, */*
	I1002 12:02:32.222416 2563543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 12:02:32.225260 2563543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 12:02:32.225289 2563543 round_trippers.go:577] Response Headers:
	I1002 12:02:32.225298 2563543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 12:02:32.225305 2563543 round_trippers.go:580]     Content-Type: application/json
	I1002 12:02:32.225312 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f46ce94e-ab39-47cd-9413-72b4b2d8d98a
	I1002 12:02:32.225318 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f0dd078-9de5-43de-a432-b38b9d4a2a6d
	I1002 12:02:32.225328 2563543 round_trippers.go:580]     Date: Mon, 02 Oct 2023 12:02:32 GMT
	I1002 12:02:32.225334 2563543 round_trippers.go:580]     Audit-Id: b7301075-c41b-43dc-92ce-d259bcbc19de
	I1002 12:02:32.225629 2563543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-361100","uid":"addea54c-14e4-4213-b46d-af4c62d4c4db","resourceVersion":"314","creationTimestamp":"2023-10-02T12:02:09Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-361100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-361100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T12_02_13_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T12:02:09Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1002 12:02:32.722210 2563543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-361100
	I1002 12:02:32.722232 2563543 round_trippers.go:469] Request Headers:
	I1002 12:02:32.722249 2563543 round_trippers.go:473]     Accept: application/json, */*
	I1002 12:02:32.722256 2563543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 12:02:32.728510 2563543 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1002 12:02:32.728546 2563543 round_trippers.go:577] Response Headers:
	I1002 12:02:32.728555 2563543 round_trippers.go:580]     Audit-Id: fa6db711-d355-4a9e-81aa-73e5b3927303
	I1002 12:02:32.728562 2563543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 12:02:32.728568 2563543 round_trippers.go:580]     Content-Type: application/json
	I1002 12:02:32.728574 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f46ce94e-ab39-47cd-9413-72b4b2d8d98a
	I1002 12:02:32.728580 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f0dd078-9de5-43de-a432-b38b9d4a2a6d
	I1002 12:02:32.728586 2563543 round_trippers.go:580]     Date: Mon, 02 Oct 2023 12:02:32 GMT
	I1002 12:02:32.728978 2563543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-361100","uid":"addea54c-14e4-4213-b46d-af4c62d4c4db","resourceVersion":"314","creationTimestamp":"2023-10-02T12:02:09Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-361100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-361100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T12_02_13_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T12:02:09Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1002 12:02:32.729432 2563543 node_ready.go:58] node "multinode-361100" has status "Ready":"False"
	I1002 12:02:33.222153 2563543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-361100
	I1002 12:02:33.222181 2563543 round_trippers.go:469] Request Headers:
	I1002 12:02:33.222191 2563543 round_trippers.go:473]     Accept: application/json, */*
	I1002 12:02:33.222199 2563543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 12:02:33.225147 2563543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 12:02:33.225168 2563543 round_trippers.go:577] Response Headers:
	I1002 12:02:33.225177 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f46ce94e-ab39-47cd-9413-72b4b2d8d98a
	I1002 12:02:33.225184 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f0dd078-9de5-43de-a432-b38b9d4a2a6d
	I1002 12:02:33.225190 2563543 round_trippers.go:580]     Date: Mon, 02 Oct 2023 12:02:33 GMT
	I1002 12:02:33.225196 2563543 round_trippers.go:580]     Audit-Id: a1c811b0-fe9b-472c-b131-51966fc74461
	I1002 12:02:33.225203 2563543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 12:02:33.225209 2563543 round_trippers.go:580]     Content-Type: application/json
	I1002 12:02:33.225302 2563543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-361100","uid":"addea54c-14e4-4213-b46d-af4c62d4c4db","resourceVersion":"314","creationTimestamp":"2023-10-02T12:02:09Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-361100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-361100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T12_02_13_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T12:02:09Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1002 12:02:33.723009 2563543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-361100
	I1002 12:02:33.723036 2563543 round_trippers.go:469] Request Headers:
	I1002 12:02:33.723046 2563543 round_trippers.go:473]     Accept: application/json, */*
	I1002 12:02:33.723054 2563543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 12:02:33.725605 2563543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 12:02:33.725627 2563543 round_trippers.go:577] Response Headers:
	I1002 12:02:33.725637 2563543 round_trippers.go:580]     Content-Type: application/json
	I1002 12:02:33.725644 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f46ce94e-ab39-47cd-9413-72b4b2d8d98a
	I1002 12:02:33.725650 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f0dd078-9de5-43de-a432-b38b9d4a2a6d
	I1002 12:02:33.725656 2563543 round_trippers.go:580]     Date: Mon, 02 Oct 2023 12:02:33 GMT
	I1002 12:02:33.725662 2563543 round_trippers.go:580]     Audit-Id: a11c1d75-a6aa-4fa3-a62d-d124e5a873eb
	I1002 12:02:33.725668 2563543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 12:02:33.725826 2563543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-361100","uid":"addea54c-14e4-4213-b46d-af4c62d4c4db","resourceVersion":"314","creationTimestamp":"2023-10-02T12:02:09Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-361100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-361100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T12_02_13_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T12:02:09Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1002 12:02:34.222429 2563543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-361100
	I1002 12:02:34.222465 2563543 round_trippers.go:469] Request Headers:
	I1002 12:02:34.222476 2563543 round_trippers.go:473]     Accept: application/json, */*
	I1002 12:02:34.222483 2563543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 12:02:34.225041 2563543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 12:02:34.225065 2563543 round_trippers.go:577] Response Headers:
	I1002 12:02:34.225074 2563543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 12:02:34.225081 2563543 round_trippers.go:580]     Content-Type: application/json
	I1002 12:02:34.225087 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f46ce94e-ab39-47cd-9413-72b4b2d8d98a
	I1002 12:02:34.225094 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f0dd078-9de5-43de-a432-b38b9d4a2a6d
	I1002 12:02:34.225101 2563543 round_trippers.go:580]     Date: Mon, 02 Oct 2023 12:02:34 GMT
	I1002 12:02:34.225107 2563543 round_trippers.go:580]     Audit-Id: 8ed2fd8d-8f79-4cdf-a248-b74e1c72461a
	I1002 12:02:34.225474 2563543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-361100","uid":"addea54c-14e4-4213-b46d-af4c62d4c4db","resourceVersion":"314","creationTimestamp":"2023-10-02T12:02:09Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-361100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-361100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T12_02_13_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T12:02:09Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1002 12:02:34.722707 2563543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-361100
	I1002 12:02:34.722731 2563543 round_trippers.go:469] Request Headers:
	I1002 12:02:34.722740 2563543 round_trippers.go:473]     Accept: application/json, */*
	I1002 12:02:34.722748 2563543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 12:02:34.725515 2563543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 12:02:34.725538 2563543 round_trippers.go:577] Response Headers:
	I1002 12:02:34.725547 2563543 round_trippers.go:580]     Audit-Id: 30d0fe0f-e923-4a1d-9c12-8eeb795a7007
	I1002 12:02:34.725553 2563543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 12:02:34.725561 2563543 round_trippers.go:580]     Content-Type: application/json
	I1002 12:02:34.725567 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f46ce94e-ab39-47cd-9413-72b4b2d8d98a
	I1002 12:02:34.725573 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f0dd078-9de5-43de-a432-b38b9d4a2a6d
	I1002 12:02:34.725580 2563543 round_trippers.go:580]     Date: Mon, 02 Oct 2023 12:02:34 GMT
	I1002 12:02:34.725764 2563543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-361100","uid":"addea54c-14e4-4213-b46d-af4c62d4c4db","resourceVersion":"314","creationTimestamp":"2023-10-02T12:02:09Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-361100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-361100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T12_02_13_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T12:02:09Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1002 12:02:35.222371 2563543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-361100
	I1002 12:02:35.222398 2563543 round_trippers.go:469] Request Headers:
	I1002 12:02:35.222409 2563543 round_trippers.go:473]     Accept: application/json, */*
	I1002 12:02:35.222416 2563543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 12:02:35.225073 2563543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 12:02:35.225096 2563543 round_trippers.go:577] Response Headers:
	I1002 12:02:35.225105 2563543 round_trippers.go:580]     Audit-Id: 0fe85121-6f34-4d5e-8685-afe190199bb8
	I1002 12:02:35.225111 2563543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 12:02:35.225117 2563543 round_trippers.go:580]     Content-Type: application/json
	I1002 12:02:35.225124 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f46ce94e-ab39-47cd-9413-72b4b2d8d98a
	I1002 12:02:35.225130 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f0dd078-9de5-43de-a432-b38b9d4a2a6d
	I1002 12:02:35.225138 2563543 round_trippers.go:580]     Date: Mon, 02 Oct 2023 12:02:35 GMT
	I1002 12:02:35.225256 2563543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-361100","uid":"addea54c-14e4-4213-b46d-af4c62d4c4db","resourceVersion":"314","creationTimestamp":"2023-10-02T12:02:09Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-361100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-361100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T12_02_13_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T12:02:09Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1002 12:02:35.225655 2563543 node_ready.go:58] node "multinode-361100" has status "Ready":"False"
	I1002 12:02:35.722764 2563543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-361100
	I1002 12:02:35.722789 2563543 round_trippers.go:469] Request Headers:
	I1002 12:02:35.722799 2563543 round_trippers.go:473]     Accept: application/json, */*
	I1002 12:02:35.722806 2563543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 12:02:35.725338 2563543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 12:02:35.725363 2563543 round_trippers.go:577] Response Headers:
	I1002 12:02:35.725373 2563543 round_trippers.go:580]     Content-Type: application/json
	I1002 12:02:35.725384 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f46ce94e-ab39-47cd-9413-72b4b2d8d98a
	I1002 12:02:35.725391 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f0dd078-9de5-43de-a432-b38b9d4a2a6d
	I1002 12:02:35.725400 2563543 round_trippers.go:580]     Date: Mon, 02 Oct 2023 12:02:35 GMT
	I1002 12:02:35.725407 2563543 round_trippers.go:580]     Audit-Id: aa4cd4e7-2263-4af0-b6b4-fb853c87e8f7
	I1002 12:02:35.725421 2563543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 12:02:35.725744 2563543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-361100","uid":"addea54c-14e4-4213-b46d-af4c62d4c4db","resourceVersion":"314","creationTimestamp":"2023-10-02T12:02:09Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-361100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-361100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T12_02_13_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T12:02:09Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1002 12:02:36.222366 2563543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-361100
	I1002 12:02:36.222394 2563543 round_trippers.go:469] Request Headers:
	I1002 12:02:36.222404 2563543 round_trippers.go:473]     Accept: application/json, */*
	I1002 12:02:36.222411 2563543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 12:02:36.225053 2563543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 12:02:36.225080 2563543 round_trippers.go:577] Response Headers:
	I1002 12:02:36.225089 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f46ce94e-ab39-47cd-9413-72b4b2d8d98a
	I1002 12:02:36.225096 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f0dd078-9de5-43de-a432-b38b9d4a2a6d
	I1002 12:02:36.225103 2563543 round_trippers.go:580]     Date: Mon, 02 Oct 2023 12:02:36 GMT
	I1002 12:02:36.225109 2563543 round_trippers.go:580]     Audit-Id: f9bd4982-7e65-46fe-b6bd-71a654b19010
	I1002 12:02:36.225115 2563543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 12:02:36.225122 2563543 round_trippers.go:580]     Content-Type: application/json
	I1002 12:02:36.225350 2563543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-361100","uid":"addea54c-14e4-4213-b46d-af4c62d4c4db","resourceVersion":"314","creationTimestamp":"2023-10-02T12:02:09Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-361100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-361100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T12_02_13_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T12:02:09Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1002 12:02:36.722416 2563543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-361100
	I1002 12:02:36.722441 2563543 round_trippers.go:469] Request Headers:
	I1002 12:02:36.722452 2563543 round_trippers.go:473]     Accept: application/json, */*
	I1002 12:02:36.722459 2563543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 12:02:36.725076 2563543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 12:02:36.725100 2563543 round_trippers.go:577] Response Headers:
	I1002 12:02:36.725109 2563543 round_trippers.go:580]     Audit-Id: 2289d6ea-897f-44d8-b683-ce86400e6564
	I1002 12:02:36.725115 2563543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 12:02:36.725121 2563543 round_trippers.go:580]     Content-Type: application/json
	I1002 12:02:36.725127 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f46ce94e-ab39-47cd-9413-72b4b2d8d98a
	I1002 12:02:36.725133 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f0dd078-9de5-43de-a432-b38b9d4a2a6d
	I1002 12:02:36.725140 2563543 round_trippers.go:580]     Date: Mon, 02 Oct 2023 12:02:36 GMT
	I1002 12:02:36.725277 2563543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-361100","uid":"addea54c-14e4-4213-b46d-af4c62d4c4db","resourceVersion":"314","creationTimestamp":"2023-10-02T12:02:09Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-361100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-361100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T12_02_13_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T12:02:09Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1002 12:02:37.222424 2563543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-361100
	I1002 12:02:37.222448 2563543 round_trippers.go:469] Request Headers:
	I1002 12:02:37.222459 2563543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 12:02:37.222467 2563543 round_trippers.go:473]     Accept: application/json, */*
	I1002 12:02:37.225194 2563543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 12:02:37.225228 2563543 round_trippers.go:577] Response Headers:
	I1002 12:02:37.225238 2563543 round_trippers.go:580]     Audit-Id: 02a1fa2e-5d96-4eb9-ac76-58c2574e2dd6
	I1002 12:02:37.225250 2563543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 12:02:37.225257 2563543 round_trippers.go:580]     Content-Type: application/json
	I1002 12:02:37.225263 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f46ce94e-ab39-47cd-9413-72b4b2d8d98a
	I1002 12:02:37.225274 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f0dd078-9de5-43de-a432-b38b9d4a2a6d
	I1002 12:02:37.225284 2563543 round_trippers.go:580]     Date: Mon, 02 Oct 2023 12:02:37 GMT
	I1002 12:02:37.225652 2563543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-361100","uid":"addea54c-14e4-4213-b46d-af4c62d4c4db","resourceVersion":"314","creationTimestamp":"2023-10-02T12:02:09Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-361100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-361100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T12_02_13_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T12:02:09Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1002 12:02:37.226084 2563543 node_ready.go:58] node "multinode-361100" has status "Ready":"False"
	I1002 12:02:37.722157 2563543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-361100
	I1002 12:02:37.722184 2563543 round_trippers.go:469] Request Headers:
	I1002 12:02:37.722198 2563543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 12:02:37.722206 2563543 round_trippers.go:473]     Accept: application/json, */*
	I1002 12:02:37.724843 2563543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 12:02:37.724866 2563543 round_trippers.go:577] Response Headers:
	I1002 12:02:37.724875 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f46ce94e-ab39-47cd-9413-72b4b2d8d98a
	I1002 12:02:37.724882 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f0dd078-9de5-43de-a432-b38b9d4a2a6d
	I1002 12:02:37.724888 2563543 round_trippers.go:580]     Date: Mon, 02 Oct 2023 12:02:37 GMT
	I1002 12:02:37.724895 2563543 round_trippers.go:580]     Audit-Id: e04d1cde-a8f7-41d7-955d-4278dd837991
	I1002 12:02:37.724901 2563543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 12:02:37.724907 2563543 round_trippers.go:580]     Content-Type: application/json
	I1002 12:02:37.725059 2563543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-361100","uid":"addea54c-14e4-4213-b46d-af4c62d4c4db","resourceVersion":"314","creationTimestamp":"2023-10-02T12:02:09Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-361100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-361100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T12_02_13_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T12:02:09Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1002 12:02:38.222138 2563543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-361100
	I1002 12:02:38.222164 2563543 round_trippers.go:469] Request Headers:
	I1002 12:02:38.222174 2563543 round_trippers.go:473]     Accept: application/json, */*
	I1002 12:02:38.222181 2563543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 12:02:38.225052 2563543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 12:02:38.225082 2563543 round_trippers.go:577] Response Headers:
	I1002 12:02:38.225110 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f46ce94e-ab39-47cd-9413-72b4b2d8d98a
	I1002 12:02:38.225119 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f0dd078-9de5-43de-a432-b38b9d4a2a6d
	I1002 12:02:38.225125 2563543 round_trippers.go:580]     Date: Mon, 02 Oct 2023 12:02:38 GMT
	I1002 12:02:38.225132 2563543 round_trippers.go:580]     Audit-Id: 0bc36a54-a6b5-444c-8a1f-1cc0ee0365c0
	I1002 12:02:38.225138 2563543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 12:02:38.225145 2563543 round_trippers.go:580]     Content-Type: application/json
	I1002 12:02:38.225272 2563543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-361100","uid":"addea54c-14e4-4213-b46d-af4c62d4c4db","resourceVersion":"314","creationTimestamp":"2023-10-02T12:02:09Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-361100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-361100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T12_02_13_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T12:02:09Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1002 12:02:38.722147 2563543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-361100
	I1002 12:02:38.722172 2563543 round_trippers.go:469] Request Headers:
	I1002 12:02:38.722186 2563543 round_trippers.go:473]     Accept: application/json, */*
	I1002 12:02:38.722193 2563543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 12:02:38.724875 2563543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 12:02:38.724899 2563543 round_trippers.go:577] Response Headers:
	I1002 12:02:38.724908 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f46ce94e-ab39-47cd-9413-72b4b2d8d98a
	I1002 12:02:38.724916 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f0dd078-9de5-43de-a432-b38b9d4a2a6d
	I1002 12:02:38.724922 2563543 round_trippers.go:580]     Date: Mon, 02 Oct 2023 12:02:38 GMT
	I1002 12:02:38.724928 2563543 round_trippers.go:580]     Audit-Id: ecbaca2a-c5d7-476d-83d4-5c1a92546841
	I1002 12:02:38.724935 2563543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 12:02:38.724941 2563543 round_trippers.go:580]     Content-Type: application/json
	I1002 12:02:38.725077 2563543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-361100","uid":"addea54c-14e4-4213-b46d-af4c62d4c4db","resourceVersion":"314","creationTimestamp":"2023-10-02T12:02:09Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-361100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-361100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T12_02_13_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T12:02:09Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1002 12:02:39.222150 2563543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-361100
	I1002 12:02:39.222174 2563543 round_trippers.go:469] Request Headers:
	I1002 12:02:39.222184 2563543 round_trippers.go:473]     Accept: application/json, */*
	I1002 12:02:39.222191 2563543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 12:02:39.224836 2563543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 12:02:39.224859 2563543 round_trippers.go:577] Response Headers:
	I1002 12:02:39.224868 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f0dd078-9de5-43de-a432-b38b9d4a2a6d
	I1002 12:02:39.224875 2563543 round_trippers.go:580]     Date: Mon, 02 Oct 2023 12:02:39 GMT
	I1002 12:02:39.224881 2563543 round_trippers.go:580]     Audit-Id: 570e01cf-fd96-4d7c-ab05-ed0493454e27
	I1002 12:02:39.224887 2563543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 12:02:39.224893 2563543 round_trippers.go:580]     Content-Type: application/json
	I1002 12:02:39.224899 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f46ce94e-ab39-47cd-9413-72b4b2d8d98a
	I1002 12:02:39.225026 2563543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-361100","uid":"addea54c-14e4-4213-b46d-af4c62d4c4db","resourceVersion":"314","creationTimestamp":"2023-10-02T12:02:09Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-361100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-361100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T12_02_13_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T12:02:09Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1002 12:02:39.722962 2563543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-361100
	I1002 12:02:39.722989 2563543 round_trippers.go:469] Request Headers:
	I1002 12:02:39.722999 2563543 round_trippers.go:473]     Accept: application/json, */*
	I1002 12:02:39.723008 2563543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 12:02:39.725620 2563543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 12:02:39.725644 2563543 round_trippers.go:577] Response Headers:
	I1002 12:02:39.725654 2563543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 12:02:39.725662 2563543 round_trippers.go:580]     Content-Type: application/json
	I1002 12:02:39.725668 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f46ce94e-ab39-47cd-9413-72b4b2d8d98a
	I1002 12:02:39.725675 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f0dd078-9de5-43de-a432-b38b9d4a2a6d
	I1002 12:02:39.725682 2563543 round_trippers.go:580]     Date: Mon, 02 Oct 2023 12:02:39 GMT
	I1002 12:02:39.725688 2563543 round_trippers.go:580]     Audit-Id: 03030fa2-7395-4162-97da-9fca6dbdcf23
	I1002 12:02:39.725816 2563543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-361100","uid":"addea54c-14e4-4213-b46d-af4c62d4c4db","resourceVersion":"314","creationTimestamp":"2023-10-02T12:02:09Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-361100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-361100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T12_02_13_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T12:02:09Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1002 12:02:39.726221 2563543 node_ready.go:58] node "multinode-361100" has status "Ready":"False"
	I1002 12:02:40.223017 2563543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-361100
	I1002 12:02:40.223041 2563543 round_trippers.go:469] Request Headers:
	I1002 12:02:40.223052 2563543 round_trippers.go:473]     Accept: application/json, */*
	I1002 12:02:40.223060 2563543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 12:02:40.228274 2563543 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1002 12:02:40.228297 2563543 round_trippers.go:577] Response Headers:
	I1002 12:02:40.228306 2563543 round_trippers.go:580]     Audit-Id: 51736959-5749-4480-bb6b-329ec4091b44
	I1002 12:02:40.228312 2563543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 12:02:40.228318 2563543 round_trippers.go:580]     Content-Type: application/json
	I1002 12:02:40.228324 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f46ce94e-ab39-47cd-9413-72b4b2d8d98a
	I1002 12:02:40.228339 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f0dd078-9de5-43de-a432-b38b9d4a2a6d
	I1002 12:02:40.228346 2563543 round_trippers.go:580]     Date: Mon, 02 Oct 2023 12:02:40 GMT
	I1002 12:02:40.228855 2563543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-361100","uid":"addea54c-14e4-4213-b46d-af4c62d4c4db","resourceVersion":"314","creationTimestamp":"2023-10-02T12:02:09Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-361100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-361100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T12_02_13_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T12:02:09Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1002 12:02:40.723027 2563543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-361100
	I1002 12:02:40.723051 2563543 round_trippers.go:469] Request Headers:
	I1002 12:02:40.723061 2563543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 12:02:40.723069 2563543 round_trippers.go:473]     Accept: application/json, */*
	I1002 12:02:40.725651 2563543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 12:02:40.725678 2563543 round_trippers.go:577] Response Headers:
	I1002 12:02:40.725687 2563543 round_trippers.go:580]     Audit-Id: 9ea83ae8-a28f-4859-9f62-f82067e7fad8
	I1002 12:02:40.725694 2563543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 12:02:40.725700 2563543 round_trippers.go:580]     Content-Type: application/json
	I1002 12:02:40.725706 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f46ce94e-ab39-47cd-9413-72b4b2d8d98a
	I1002 12:02:40.725712 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f0dd078-9de5-43de-a432-b38b9d4a2a6d
	I1002 12:02:40.725719 2563543 round_trippers.go:580]     Date: Mon, 02 Oct 2023 12:02:40 GMT
	I1002 12:02:40.726023 2563543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-361100","uid":"addea54c-14e4-4213-b46d-af4c62d4c4db","resourceVersion":"314","creationTimestamp":"2023-10-02T12:02:09Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-361100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-361100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T12_02_13_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T12:02:09Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1002 12:02:41.222726 2563543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-361100
	I1002 12:02:41.222755 2563543 round_trippers.go:469] Request Headers:
	I1002 12:02:41.222769 2563543 round_trippers.go:473]     Accept: application/json, */*
	I1002 12:02:41.222777 2563543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 12:02:41.225496 2563543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 12:02:41.225524 2563543 round_trippers.go:577] Response Headers:
	I1002 12:02:41.225534 2563543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 12:02:41.225541 2563543 round_trippers.go:580]     Content-Type: application/json
	I1002 12:02:41.225547 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f46ce94e-ab39-47cd-9413-72b4b2d8d98a
	I1002 12:02:41.225553 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f0dd078-9de5-43de-a432-b38b9d4a2a6d
	I1002 12:02:41.225560 2563543 round_trippers.go:580]     Date: Mon, 02 Oct 2023 12:02:41 GMT
	I1002 12:02:41.225571 2563543 round_trippers.go:580]     Audit-Id: 512ea367-3668-4eea-9761-10f086427438
	I1002 12:02:41.225778 2563543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-361100","uid":"addea54c-14e4-4213-b46d-af4c62d4c4db","resourceVersion":"314","creationTimestamp":"2023-10-02T12:02:09Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-361100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-361100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T12_02_13_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T12:02:09Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1002 12:02:41.722954 2563543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-361100
	I1002 12:02:41.722979 2563543 round_trippers.go:469] Request Headers:
	I1002 12:02:41.722989 2563543 round_trippers.go:473]     Accept: application/json, */*
	I1002 12:02:41.722996 2563543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 12:02:41.725512 2563543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 12:02:41.725537 2563543 round_trippers.go:577] Response Headers:
	I1002 12:02:41.725546 2563543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 12:02:41.725553 2563543 round_trippers.go:580]     Content-Type: application/json
	I1002 12:02:41.725559 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f46ce94e-ab39-47cd-9413-72b4b2d8d98a
	I1002 12:02:41.725565 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f0dd078-9de5-43de-a432-b38b9d4a2a6d
	I1002 12:02:41.725571 2563543 round_trippers.go:580]     Date: Mon, 02 Oct 2023 12:02:41 GMT
	I1002 12:02:41.725578 2563543 round_trippers.go:580]     Audit-Id: 1bbc198a-6149-45ac-9587-b08f8ea2639e
	I1002 12:02:41.725690 2563543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-361100","uid":"addea54c-14e4-4213-b46d-af4c62d4c4db","resourceVersion":"314","creationTimestamp":"2023-10-02T12:02:09Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-361100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-361100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T12_02_13_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T12:02:09Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1002 12:02:42.222046 2563543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-361100
	I1002 12:02:42.222070 2563543 round_trippers.go:469] Request Headers:
	I1002 12:02:42.222080 2563543 round_trippers.go:473]     Accept: application/json, */*
	I1002 12:02:42.222088 2563543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 12:02:42.225615 2563543 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1002 12:02:42.225647 2563543 round_trippers.go:577] Response Headers:
	I1002 12:02:42.225656 2563543 round_trippers.go:580]     Audit-Id: 71f371a8-00ee-48b1-b45a-22d72790ad2d
	I1002 12:02:42.225663 2563543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 12:02:42.225669 2563543 round_trippers.go:580]     Content-Type: application/json
	I1002 12:02:42.225676 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f46ce94e-ab39-47cd-9413-72b4b2d8d98a
	I1002 12:02:42.225684 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f0dd078-9de5-43de-a432-b38b9d4a2a6d
	I1002 12:02:42.225692 2563543 round_trippers.go:580]     Date: Mon, 02 Oct 2023 12:02:42 GMT
	I1002 12:02:42.225819 2563543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-361100","uid":"addea54c-14e4-4213-b46d-af4c62d4c4db","resourceVersion":"314","creationTimestamp":"2023-10-02T12:02:09Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-361100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-361100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T12_02_13_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T12:02:09Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1002 12:02:42.226289 2563543 node_ready.go:58] node "multinode-361100" has status "Ready":"False"
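The `node_ready.go` entries above record a poll loop: minikube repeatedly GETs the Node object and inspects its "Ready" condition until it flips to "True". A minimal sketch of that check (an assumption for illustration, not minikube's actual code; the node name and condition shape are taken from the logged responses):

```python
# Sketch of the readiness check behind the node_ready.go log lines:
# fetch the Node object (here, a dict parsed from the apiserver JSON)
# and read the status of its "Ready" condition.

def node_ready(node: dict) -> bool:
    """Return True when the Node's Ready condition reports status "True"."""
    for cond in node.get("status", {}).get("conditions", []):
        if cond.get("type") == "Ready":
            return cond.get("status") == "True"
    return False  # no Ready condition yet: treat the node as not ready

# Example mirroring the polled state in the log, where multinode-361100
# is still reporting Ready=False:
node = {
    "kind": "Node",
    "metadata": {"name": "multinode-361100"},
    "status": {"conditions": [{"type": "Ready", "status": "False"}]},
}
print(node_ready(node))  # → False
```

Each iteration in the log is one such fetch-and-check, spaced roughly 500ms apart, which is why the same Node body repeats until the kubelet posts a Ready condition.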
	I1002 12:02:42.722125 2563543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-361100
	I1002 12:02:42.722160 2563543 round_trippers.go:469] Request Headers:
	I1002 12:02:42.722177 2563543 round_trippers.go:473]     Accept: application/json, */*
	I1002 12:02:42.722185 2563543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 12:02:42.724915 2563543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 12:02:42.724954 2563543 round_trippers.go:577] Response Headers:
	I1002 12:02:42.724963 2563543 round_trippers.go:580]     Audit-Id: 409f6fc7-01f6-495e-a25d-d24e02e716a5
	I1002 12:02:42.724969 2563543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 12:02:42.724975 2563543 round_trippers.go:580]     Content-Type: application/json
	I1002 12:02:42.724981 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f46ce94e-ab39-47cd-9413-72b4b2d8d98a
	I1002 12:02:42.724988 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f0dd078-9de5-43de-a432-b38b9d4a2a6d
	I1002 12:02:42.724994 2563543 round_trippers.go:580]     Date: Mon, 02 Oct 2023 12:02:42 GMT
	I1002 12:02:42.725133 2563543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-361100","uid":"addea54c-14e4-4213-b46d-af4c62d4c4db","resourceVersion":"314","creationTimestamp":"2023-10-02T12:02:09Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-361100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-361100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T12_02_13_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T12:02:09Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1002 12:02:43.222209 2563543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-361100
	I1002 12:02:43.222236 2563543 round_trippers.go:469] Request Headers:
	I1002 12:02:43.222246 2563543 round_trippers.go:473]     Accept: application/json, */*
	I1002 12:02:43.222254 2563543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 12:02:43.225086 2563543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 12:02:43.225119 2563543 round_trippers.go:577] Response Headers:
	I1002 12:02:43.225129 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f46ce94e-ab39-47cd-9413-72b4b2d8d98a
	I1002 12:02:43.225135 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f0dd078-9de5-43de-a432-b38b9d4a2a6d
	I1002 12:02:43.225144 2563543 round_trippers.go:580]     Date: Mon, 02 Oct 2023 12:02:43 GMT
	I1002 12:02:43.225150 2563543 round_trippers.go:580]     Audit-Id: a93d0cf8-749a-4d6d-bcc4-9443532258d8
	I1002 12:02:43.225157 2563543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 12:02:43.225166 2563543 round_trippers.go:580]     Content-Type: application/json
	I1002 12:02:43.225447 2563543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-361100","uid":"addea54c-14e4-4213-b46d-af4c62d4c4db","resourceVersion":"314","creationTimestamp":"2023-10-02T12:02:09Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-361100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-361100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T12_02_13_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T12:02:09Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1002 12:02:43.722195 2563543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-361100
	I1002 12:02:43.722224 2563543 round_trippers.go:469] Request Headers:
	I1002 12:02:43.722234 2563543 round_trippers.go:473]     Accept: application/json, */*
	I1002 12:02:43.722244 2563543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 12:02:43.725244 2563543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 12:02:43.725273 2563543 round_trippers.go:577] Response Headers:
	I1002 12:02:43.725283 2563543 round_trippers.go:580]     Audit-Id: a539a1cc-d5ce-4217-911b-0188cbd017ee
	I1002 12:02:43.725291 2563543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 12:02:43.725301 2563543 round_trippers.go:580]     Content-Type: application/json
	I1002 12:02:43.725308 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f46ce94e-ab39-47cd-9413-72b4b2d8d98a
	I1002 12:02:43.725314 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f0dd078-9de5-43de-a432-b38b9d4a2a6d
	I1002 12:02:43.725325 2563543 round_trippers.go:580]     Date: Mon, 02 Oct 2023 12:02:43 GMT
	I1002 12:02:43.726920 2563543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-361100","uid":"addea54c-14e4-4213-b46d-af4c62d4c4db","resourceVersion":"314","creationTimestamp":"2023-10-02T12:02:09Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-361100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-361100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T12_02_13_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T12:02:09Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1002 12:02:44.222536 2563543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-361100
	I1002 12:02:44.222562 2563543 round_trippers.go:469] Request Headers:
	I1002 12:02:44.222574 2563543 round_trippers.go:473]     Accept: application/json, */*
	I1002 12:02:44.222582 2563543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 12:02:44.225122 2563543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 12:02:44.225150 2563543 round_trippers.go:577] Response Headers:
	I1002 12:02:44.225159 2563543 round_trippers.go:580]     Date: Mon, 02 Oct 2023 12:02:44 GMT
	I1002 12:02:44.225166 2563543 round_trippers.go:580]     Audit-Id: ab7c5d70-0e78-4441-80a0-c7c60b05c368
	I1002 12:02:44.225173 2563543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 12:02:44.225179 2563543 round_trippers.go:580]     Content-Type: application/json
	I1002 12:02:44.225186 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f46ce94e-ab39-47cd-9413-72b4b2d8d98a
	I1002 12:02:44.225192 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f0dd078-9de5-43de-a432-b38b9d4a2a6d
	I1002 12:02:44.225430 2563543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-361100","uid":"addea54c-14e4-4213-b46d-af4c62d4c4db","resourceVersion":"314","creationTimestamp":"2023-10-02T12:02:09Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-361100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-361100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T12_02_13_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T12:02:09Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1002 12:02:44.722164 2563543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-361100
	I1002 12:02:44.722213 2563543 round_trippers.go:469] Request Headers:
	I1002 12:02:44.722223 2563543 round_trippers.go:473]     Accept: application/json, */*
	I1002 12:02:44.722230 2563543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 12:02:44.725105 2563543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 12:02:44.725195 2563543 round_trippers.go:577] Response Headers:
	I1002 12:02:44.725214 2563543 round_trippers.go:580]     Audit-Id: 00772a5e-5610-4672-b9f9-3e6d95be83f6
	I1002 12:02:44.725222 2563543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 12:02:44.725228 2563543 round_trippers.go:580]     Content-Type: application/json
	I1002 12:02:44.725248 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f46ce94e-ab39-47cd-9413-72b4b2d8d98a
	I1002 12:02:44.725260 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f0dd078-9de5-43de-a432-b38b9d4a2a6d
	I1002 12:02:44.725267 2563543 round_trippers.go:580]     Date: Mon, 02 Oct 2023 12:02:44 GMT
	I1002 12:02:44.725396 2563543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-361100","uid":"addea54c-14e4-4213-b46d-af4c62d4c4db","resourceVersion":"314","creationTimestamp":"2023-10-02T12:02:09Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-361100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-361100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T12_02_13_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T12:02:09Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1002 12:02:44.725810 2563543 node_ready.go:58] node "multinode-361100" has status "Ready":"False"
	I1002 12:02:45.223011 2563543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-361100
	I1002 12:02:45.223035 2563543 round_trippers.go:469] Request Headers:
	I1002 12:02:45.223045 2563543 round_trippers.go:473]     Accept: application/json, */*
	I1002 12:02:45.223053 2563543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 12:02:45.226966 2563543 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1002 12:02:45.226999 2563543 round_trippers.go:577] Response Headers:
	I1002 12:02:45.227009 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f0dd078-9de5-43de-a432-b38b9d4a2a6d
	I1002 12:02:45.227017 2563543 round_trippers.go:580]     Date: Mon, 02 Oct 2023 12:02:45 GMT
	I1002 12:02:45.227023 2563543 round_trippers.go:580]     Audit-Id: f8de46ce-5d63-4b35-acc8-45ce3fcd2c0a
	I1002 12:02:45.227029 2563543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 12:02:45.227035 2563543 round_trippers.go:580]     Content-Type: application/json
	I1002 12:02:45.227042 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f46ce94e-ab39-47cd-9413-72b4b2d8d98a
	I1002 12:02:45.227155 2563543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-361100","uid":"addea54c-14e4-4213-b46d-af4c62d4c4db","resourceVersion":"314","creationTimestamp":"2023-10-02T12:02:09Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-361100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-361100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T12_02_13_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T12:02:09Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1002 12:02:45.723052 2563543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-361100
	I1002 12:02:45.723078 2563543 round_trippers.go:469] Request Headers:
	I1002 12:02:45.723089 2563543 round_trippers.go:473]     Accept: application/json, */*
	I1002 12:02:45.723097 2563543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 12:02:45.725836 2563543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 12:02:45.725869 2563543 round_trippers.go:577] Response Headers:
	I1002 12:02:45.725879 2563543 round_trippers.go:580]     Content-Type: application/json
	I1002 12:02:45.725885 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f46ce94e-ab39-47cd-9413-72b4b2d8d98a
	I1002 12:02:45.725893 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f0dd078-9de5-43de-a432-b38b9d4a2a6d
	I1002 12:02:45.725900 2563543 round_trippers.go:580]     Date: Mon, 02 Oct 2023 12:02:45 GMT
	I1002 12:02:45.725906 2563543 round_trippers.go:580]     Audit-Id: b821f77d-a750-4fd6-b099-13046c49b286
	I1002 12:02:45.725925 2563543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 12:02:45.726368 2563543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-361100","uid":"addea54c-14e4-4213-b46d-af4c62d4c4db","resourceVersion":"314","creationTimestamp":"2023-10-02T12:02:09Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-361100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-361100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T12_02_13_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T12:02:09Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1002 12:02:46.223080 2563543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-361100
	I1002 12:02:46.223107 2563543 round_trippers.go:469] Request Headers:
	I1002 12:02:46.223117 2563543 round_trippers.go:473]     Accept: application/json, */*
	I1002 12:02:46.223125 2563543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 12:02:46.225786 2563543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 12:02:46.225817 2563543 round_trippers.go:577] Response Headers:
	I1002 12:02:46.225828 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f46ce94e-ab39-47cd-9413-72b4b2d8d98a
	I1002 12:02:46.225835 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f0dd078-9de5-43de-a432-b38b9d4a2a6d
	I1002 12:02:46.225841 2563543 round_trippers.go:580]     Date: Mon, 02 Oct 2023 12:02:46 GMT
	I1002 12:02:46.225849 2563543 round_trippers.go:580]     Audit-Id: b516dd9f-4564-4925-b52b-61ef85c24945
	I1002 12:02:46.225855 2563543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 12:02:46.225861 2563543 round_trippers.go:580]     Content-Type: application/json
	I1002 12:02:46.225990 2563543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-361100","uid":"addea54c-14e4-4213-b46d-af4c62d4c4db","resourceVersion":"314","creationTimestamp":"2023-10-02T12:02:09Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-361100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-361100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T12_02_13_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T12:02:09Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1002 12:02:46.722108 2563543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-361100
	I1002 12:02:46.722137 2563543 round_trippers.go:469] Request Headers:
	I1002 12:02:46.722147 2563543 round_trippers.go:473]     Accept: application/json, */*
	I1002 12:02:46.722160 2563543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 12:02:46.724702 2563543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 12:02:46.724729 2563543 round_trippers.go:577] Response Headers:
	I1002 12:02:46.724739 2563543 round_trippers.go:580]     Audit-Id: 9afcc8b1-a2a0-47df-a963-d74ce3cd6d11
	I1002 12:02:46.724754 2563543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 12:02:46.724762 2563543 round_trippers.go:580]     Content-Type: application/json
	I1002 12:02:46.724768 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f46ce94e-ab39-47cd-9413-72b4b2d8d98a
	I1002 12:02:46.724782 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f0dd078-9de5-43de-a432-b38b9d4a2a6d
	I1002 12:02:46.724789 2563543 round_trippers.go:580]     Date: Mon, 02 Oct 2023 12:02:46 GMT
	I1002 12:02:46.725114 2563543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-361100","uid":"addea54c-14e4-4213-b46d-af4c62d4c4db","resourceVersion":"314","creationTimestamp":"2023-10-02T12:02:09Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-361100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-361100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T12_02_13_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T12:02:09Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1002 12:02:47.222833 2563543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-361100
	I1002 12:02:47.222860 2563543 round_trippers.go:469] Request Headers:
	I1002 12:02:47.222869 2563543 round_trippers.go:473]     Accept: application/json, */*
	I1002 12:02:47.222876 2563543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 12:02:47.225625 2563543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 12:02:47.225653 2563543 round_trippers.go:577] Response Headers:
	I1002 12:02:47.225662 2563543 round_trippers.go:580]     Audit-Id: fa3a3d00-fef0-483e-9998-699d8d113e90
	I1002 12:02:47.225669 2563543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 12:02:47.225676 2563543 round_trippers.go:580]     Content-Type: application/json
	I1002 12:02:47.225682 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f46ce94e-ab39-47cd-9413-72b4b2d8d98a
	I1002 12:02:47.225688 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f0dd078-9de5-43de-a432-b38b9d4a2a6d
	I1002 12:02:47.225695 2563543 round_trippers.go:580]     Date: Mon, 02 Oct 2023 12:02:47 GMT
	I1002 12:02:47.225936 2563543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-361100","uid":"addea54c-14e4-4213-b46d-af4c62d4c4db","resourceVersion":"314","creationTimestamp":"2023-10-02T12:02:09Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-361100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-361100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T12_02_13_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T12:02:09Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1002 12:02:47.226427 2563543 node_ready.go:58] node "multinode-361100" has status "Ready":"False"
	I1002 12:02:47.722556 2563543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-361100
	I1002 12:02:47.722582 2563543 round_trippers.go:469] Request Headers:
	I1002 12:02:47.722593 2563543 round_trippers.go:473]     Accept: application/json, */*
	I1002 12:02:47.722601 2563543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 12:02:47.725321 2563543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 12:02:47.725349 2563543 round_trippers.go:577] Response Headers:
	I1002 12:02:47.725360 2563543 round_trippers.go:580]     Content-Type: application/json
	I1002 12:02:47.725366 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f46ce94e-ab39-47cd-9413-72b4b2d8d98a
	I1002 12:02:47.725373 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f0dd078-9de5-43de-a432-b38b9d4a2a6d
	I1002 12:02:47.725382 2563543 round_trippers.go:580]     Date: Mon, 02 Oct 2023 12:02:47 GMT
	I1002 12:02:47.725388 2563543 round_trippers.go:580]     Audit-Id: 70c53fb3-eaa2-47b7-b192-ba6d0f6ab30a
	I1002 12:02:47.725395 2563543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 12:02:47.725536 2563543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-361100","uid":"addea54c-14e4-4213-b46d-af4c62d4c4db","resourceVersion":"314","creationTimestamp":"2023-10-02T12:02:09Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-361100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-361100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T12_02_13_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T12:02:09Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1002 12:02:48.222822 2563543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-361100
	I1002 12:02:48.222900 2563543 round_trippers.go:469] Request Headers:
	I1002 12:02:48.222930 2563543 round_trippers.go:473]     Accept: application/json, */*
	I1002 12:02:48.222954 2563543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 12:02:48.225604 2563543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 12:02:48.225637 2563543 round_trippers.go:577] Response Headers:
	I1002 12:02:48.225646 2563543 round_trippers.go:580]     Audit-Id: a23b245c-7162-45a0-abc1-4291a0f09eef
	I1002 12:02:48.225652 2563543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 12:02:48.225659 2563543 round_trippers.go:580]     Content-Type: application/json
	I1002 12:02:48.225665 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f46ce94e-ab39-47cd-9413-72b4b2d8d98a
	I1002 12:02:48.225689 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f0dd078-9de5-43de-a432-b38b9d4a2a6d
	I1002 12:02:48.225696 2563543 round_trippers.go:580]     Date: Mon, 02 Oct 2023 12:02:48 GMT
	I1002 12:02:48.225817 2563543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-361100","uid":"addea54c-14e4-4213-b46d-af4c62d4c4db","resourceVersion":"314","creationTimestamp":"2023-10-02T12:02:09Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-361100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-361100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T12_02_13_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T12:02:09Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1002 12:02:48.722361 2563543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-361100
	I1002 12:02:48.722389 2563543 round_trippers.go:469] Request Headers:
	I1002 12:02:48.722399 2563543 round_trippers.go:473]     Accept: application/json, */*
	I1002 12:02:48.722410 2563543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 12:02:48.725244 2563543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 12:02:48.725273 2563543 round_trippers.go:577] Response Headers:
	I1002 12:02:48.725282 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f46ce94e-ab39-47cd-9413-72b4b2d8d98a
	I1002 12:02:48.725289 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f0dd078-9de5-43de-a432-b38b9d4a2a6d
	I1002 12:02:48.725309 2563543 round_trippers.go:580]     Date: Mon, 02 Oct 2023 12:02:48 GMT
	I1002 12:02:48.725315 2563543 round_trippers.go:580]     Audit-Id: e40beb7e-6cb2-408a-835c-66038504fbe2
	I1002 12:02:48.725322 2563543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 12:02:48.725328 2563543 round_trippers.go:580]     Content-Type: application/json
	I1002 12:02:48.725749 2563543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-361100","uid":"addea54c-14e4-4213-b46d-af4c62d4c4db","resourceVersion":"314","creationTimestamp":"2023-10-02T12:02:09Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-361100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-361100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T12_02_13_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T12:02:09Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1002 12:02:49.222399 2563543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-361100
	I1002 12:02:49.222425 2563543 round_trippers.go:469] Request Headers:
	I1002 12:02:49.222436 2563543 round_trippers.go:473]     Accept: application/json, */*
	I1002 12:02:49.222444 2563543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 12:02:49.225947 2563543 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1002 12:02:49.225971 2563543 round_trippers.go:577] Response Headers:
	I1002 12:02:49.225980 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f0dd078-9de5-43de-a432-b38b9d4a2a6d
	I1002 12:02:49.225987 2563543 round_trippers.go:580]     Date: Mon, 02 Oct 2023 12:02:49 GMT
	I1002 12:02:49.225993 2563543 round_trippers.go:580]     Audit-Id: c138b508-6c8d-4be1-8168-dc797a9411c1
	I1002 12:02:49.225999 2563543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 12:02:49.226005 2563543 round_trippers.go:580]     Content-Type: application/json
	I1002 12:02:49.226011 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f46ce94e-ab39-47cd-9413-72b4b2d8d98a
	I1002 12:02:49.226134 2563543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-361100","uid":"addea54c-14e4-4213-b46d-af4c62d4c4db","resourceVersion":"314","creationTimestamp":"2023-10-02T12:02:09Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-361100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-361100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T12_02_13_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T12:02:09Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1002 12:02:49.226634 2563543 node_ready.go:58] node "multinode-361100" has status "Ready":"False"
	I1002 12:02:49.722976 2563543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-361100
	I1002 12:02:49.723013 2563543 round_trippers.go:469] Request Headers:
	I1002 12:02:49.723023 2563543 round_trippers.go:473]     Accept: application/json, */*
	I1002 12:02:49.723034 2563543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 12:02:49.725899 2563543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 12:02:49.725921 2563543 round_trippers.go:577] Response Headers:
	I1002 12:02:49.725933 2563543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 12:02:49.725940 2563543 round_trippers.go:580]     Content-Type: application/json
	I1002 12:02:49.725946 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f46ce94e-ab39-47cd-9413-72b4b2d8d98a
	I1002 12:02:49.725952 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f0dd078-9de5-43de-a432-b38b9d4a2a6d
	I1002 12:02:49.725958 2563543 round_trippers.go:580]     Date: Mon, 02 Oct 2023 12:02:49 GMT
	I1002 12:02:49.725965 2563543 round_trippers.go:580]     Audit-Id: 69d5b59b-0aa2-4523-8360-a93a750d3d64
	I1002 12:02:49.726114 2563543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-361100","uid":"addea54c-14e4-4213-b46d-af4c62d4c4db","resourceVersion":"314","creationTimestamp":"2023-10-02T12:02:09Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-361100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-361100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T12_02_13_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T12:02:09Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1002 12:02:50.222846 2563543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-361100
	I1002 12:02:50.222871 2563543 round_trippers.go:469] Request Headers:
	I1002 12:02:50.222881 2563543 round_trippers.go:473]     Accept: application/json, */*
	I1002 12:02:50.222889 2563543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 12:02:50.225430 2563543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 12:02:50.225452 2563543 round_trippers.go:577] Response Headers:
	I1002 12:02:50.225461 2563543 round_trippers.go:580]     Audit-Id: 584c6ef6-1716-4a62-8eb5-aa02f6e29724
	I1002 12:02:50.225467 2563543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 12:02:50.225475 2563543 round_trippers.go:580]     Content-Type: application/json
	I1002 12:02:50.225481 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f46ce94e-ab39-47cd-9413-72b4b2d8d98a
	I1002 12:02:50.225487 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f0dd078-9de5-43de-a432-b38b9d4a2a6d
	I1002 12:02:50.225494 2563543 round_trippers.go:580]     Date: Mon, 02 Oct 2023 12:02:50 GMT
	I1002 12:02:50.225600 2563543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-361100","uid":"addea54c-14e4-4213-b46d-af4c62d4c4db","resourceVersion":"314","creationTimestamp":"2023-10-02T12:02:09Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-361100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-361100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T12_02_13_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T12:02:09Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1002 12:02:50.722715 2563543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-361100
	I1002 12:02:50.722740 2563543 round_trippers.go:469] Request Headers:
	I1002 12:02:50.722750 2563543 round_trippers.go:473]     Accept: application/json, */*
	I1002 12:02:50.722757 2563543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 12:02:50.725319 2563543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 12:02:50.725348 2563543 round_trippers.go:577] Response Headers:
	I1002 12:02:50.725357 2563543 round_trippers.go:580]     Audit-Id: 0958da5c-6a67-42d5-8ec2-89f641888b02
	I1002 12:02:50.725364 2563543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 12:02:50.725370 2563543 round_trippers.go:580]     Content-Type: application/json
	I1002 12:02:50.725376 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f46ce94e-ab39-47cd-9413-72b4b2d8d98a
	I1002 12:02:50.725382 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f0dd078-9de5-43de-a432-b38b9d4a2a6d
	I1002 12:02:50.725389 2563543 round_trippers.go:580]     Date: Mon, 02 Oct 2023 12:02:50 GMT
	I1002 12:02:50.725533 2563543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-361100","uid":"addea54c-14e4-4213-b46d-af4c62d4c4db","resourceVersion":"314","creationTimestamp":"2023-10-02T12:02:09Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-361100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-361100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T12_02_13_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T12:02:09Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1002 12:02:51.222702 2563543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-361100
	I1002 12:02:51.222728 2563543 round_trippers.go:469] Request Headers:
	I1002 12:02:51.222737 2563543 round_trippers.go:473]     Accept: application/json, */*
	I1002 12:02:51.222744 2563543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 12:02:51.225419 2563543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 12:02:51.225447 2563543 round_trippers.go:577] Response Headers:
	I1002 12:02:51.225457 2563543 round_trippers.go:580]     Content-Type: application/json
	I1002 12:02:51.225463 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f46ce94e-ab39-47cd-9413-72b4b2d8d98a
	I1002 12:02:51.225469 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f0dd078-9de5-43de-a432-b38b9d4a2a6d
	I1002 12:02:51.225477 2563543 round_trippers.go:580]     Date: Mon, 02 Oct 2023 12:02:51 GMT
	I1002 12:02:51.225483 2563543 round_trippers.go:580]     Audit-Id: 2a8212fa-dc38-4790-aa5e-d5370cf30f1b
	I1002 12:02:51.225494 2563543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 12:02:51.225599 2563543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-361100","uid":"addea54c-14e4-4213-b46d-af4c62d4c4db","resourceVersion":"314","creationTimestamp":"2023-10-02T12:02:09Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-361100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-361100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T12_02_13_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T12:02:09Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1002 12:02:51.722770 2563543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-361100
	I1002 12:02:51.722794 2563543 round_trippers.go:469] Request Headers:
	I1002 12:02:51.722803 2563543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 12:02:51.722811 2563543 round_trippers.go:473]     Accept: application/json, */*
	I1002 12:02:51.725344 2563543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 12:02:51.725372 2563543 round_trippers.go:577] Response Headers:
	I1002 12:02:51.725380 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f0dd078-9de5-43de-a432-b38b9d4a2a6d
	I1002 12:02:51.725387 2563543 round_trippers.go:580]     Date: Mon, 02 Oct 2023 12:02:51 GMT
	I1002 12:02:51.725394 2563543 round_trippers.go:580]     Audit-Id: dbdc7730-99b7-4e9c-b536-bad7c686eda8
	I1002 12:02:51.725400 2563543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 12:02:51.725410 2563543 round_trippers.go:580]     Content-Type: application/json
	I1002 12:02:51.725416 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f46ce94e-ab39-47cd-9413-72b4b2d8d98a
	I1002 12:02:51.725747 2563543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-361100","uid":"addea54c-14e4-4213-b46d-af4c62d4c4db","resourceVersion":"314","creationTimestamp":"2023-10-02T12:02:09Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-361100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-361100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T12_02_13_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T12:02:09Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1002 12:02:51.726161 2563543 node_ready.go:58] node "multinode-361100" has status "Ready":"False"
	I1002 12:02:52.222703 2563543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-361100
	I1002 12:02:52.222730 2563543 round_trippers.go:469] Request Headers:
	I1002 12:02:52.222740 2563543 round_trippers.go:473]     Accept: application/json, */*
	I1002 12:02:52.222748 2563543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 12:02:52.225447 2563543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 12:02:52.225470 2563543 round_trippers.go:577] Response Headers:
	I1002 12:02:52.225479 2563543 round_trippers.go:580]     Audit-Id: 68219bcd-7671-417d-a64d-e0ab54437860
	I1002 12:02:52.225486 2563543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 12:02:52.225524 2563543 round_trippers.go:580]     Content-Type: application/json
	I1002 12:02:52.225531 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f46ce94e-ab39-47cd-9413-72b4b2d8d98a
	I1002 12:02:52.225537 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f0dd078-9de5-43de-a432-b38b9d4a2a6d
	I1002 12:02:52.225543 2563543 round_trippers.go:580]     Date: Mon, 02 Oct 2023 12:02:52 GMT
	I1002 12:02:52.225672 2563543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-361100","uid":"addea54c-14e4-4213-b46d-af4c62d4c4db","resourceVersion":"314","creationTimestamp":"2023-10-02T12:02:09Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-361100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-361100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T12_02_13_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T12:02:09Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1002 12:02:52.722616 2563543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-361100
	I1002 12:02:52.722641 2563543 round_trippers.go:469] Request Headers:
	I1002 12:02:52.722650 2563543 round_trippers.go:473]     Accept: application/json, */*
	I1002 12:02:52.722657 2563543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 12:02:52.725251 2563543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 12:02:52.725276 2563543 round_trippers.go:577] Response Headers:
	I1002 12:02:52.725284 2563543 round_trippers.go:580]     Date: Mon, 02 Oct 2023 12:02:52 GMT
	I1002 12:02:52.725291 2563543 round_trippers.go:580]     Audit-Id: 1a2bd257-6632-450f-bc61-b68505291d1f
	I1002 12:02:52.725297 2563543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 12:02:52.725303 2563543 round_trippers.go:580]     Content-Type: application/json
	I1002 12:02:52.725309 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f46ce94e-ab39-47cd-9413-72b4b2d8d98a
	I1002 12:02:52.725316 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f0dd078-9de5-43de-a432-b38b9d4a2a6d
	I1002 12:02:52.725449 2563543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-361100","uid":"addea54c-14e4-4213-b46d-af4c62d4c4db","resourceVersion":"314","creationTimestamp":"2023-10-02T12:02:09Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-361100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-361100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T12_02_13_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T12:02:09Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1002 12:02:53.222285 2563543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-361100
	I1002 12:02:53.222313 2563543 round_trippers.go:469] Request Headers:
	I1002 12:02:53.222324 2563543 round_trippers.go:473]     Accept: application/json, */*
	I1002 12:02:53.222331 2563543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 12:02:53.225064 2563543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 12:02:53.225095 2563543 round_trippers.go:577] Response Headers:
	I1002 12:02:53.225105 2563543 round_trippers.go:580]     Content-Type: application/json
	I1002 12:02:53.225112 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f46ce94e-ab39-47cd-9413-72b4b2d8d98a
	I1002 12:02:53.225120 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f0dd078-9de5-43de-a432-b38b9d4a2a6d
	I1002 12:02:53.225126 2563543 round_trippers.go:580]     Date: Mon, 02 Oct 2023 12:02:53 GMT
	I1002 12:02:53.225132 2563543 round_trippers.go:580]     Audit-Id: 198ebcb8-a596-4356-95d8-ff93d739c66d
	I1002 12:02:53.225141 2563543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 12:02:53.225265 2563543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-361100","uid":"addea54c-14e4-4213-b46d-af4c62d4c4db","resourceVersion":"314","creationTimestamp":"2023-10-02T12:02:09Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-361100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-361100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T12_02_13_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T12:02:09Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1002 12:02:53.722381 2563543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-361100
	I1002 12:02:53.722407 2563543 round_trippers.go:469] Request Headers:
	I1002 12:02:53.722418 2563543 round_trippers.go:473]     Accept: application/json, */*
	I1002 12:02:53.722425 2563543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 12:02:53.725095 2563543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 12:02:53.725126 2563543 round_trippers.go:577] Response Headers:
	I1002 12:02:53.725135 2563543 round_trippers.go:580]     Content-Type: application/json
	I1002 12:02:53.725142 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f46ce94e-ab39-47cd-9413-72b4b2d8d98a
	I1002 12:02:53.725149 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f0dd078-9de5-43de-a432-b38b9d4a2a6d
	I1002 12:02:53.725156 2563543 round_trippers.go:580]     Date: Mon, 02 Oct 2023 12:02:53 GMT
	I1002 12:02:53.725162 2563543 round_trippers.go:580]     Audit-Id: f6cef638-c7bd-4528-a116-6e6f9a84ea61
	I1002 12:02:53.725168 2563543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 12:02:53.725533 2563543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-361100","uid":"addea54c-14e4-4213-b46d-af4c62d4c4db","resourceVersion":"314","creationTimestamp":"2023-10-02T12:02:09Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-361100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-361100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T12_02_13_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T12:02:09Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1002 12:02:54.222149 2563543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-361100
	I1002 12:02:54.222177 2563543 round_trippers.go:469] Request Headers:
	I1002 12:02:54.222188 2563543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 12:02:54.222196 2563543 round_trippers.go:473]     Accept: application/json, */*
	I1002 12:02:54.225020 2563543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 12:02:54.225042 2563543 round_trippers.go:577] Response Headers:
	I1002 12:02:54.225051 2563543 round_trippers.go:580]     Audit-Id: 3ca112f0-cc46-4177-9248-dbef33c5b3a3
	I1002 12:02:54.225057 2563543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 12:02:54.225063 2563543 round_trippers.go:580]     Content-Type: application/json
	I1002 12:02:54.225070 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f46ce94e-ab39-47cd-9413-72b4b2d8d98a
	I1002 12:02:54.225076 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f0dd078-9de5-43de-a432-b38b9d4a2a6d
	I1002 12:02:54.225082 2563543 round_trippers.go:580]     Date: Mon, 02 Oct 2023 12:02:54 GMT
	I1002 12:02:54.225214 2563543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-361100","uid":"addea54c-14e4-4213-b46d-af4c62d4c4db","resourceVersion":"314","creationTimestamp":"2023-10-02T12:02:09Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-361100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-361100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T12_02_13_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T12:02:09Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1002 12:02:54.225626 2563543 node_ready.go:58] node "multinode-361100" has status "Ready":"False"
	I1002 12:02:54.722337 2563543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-361100
	I1002 12:02:54.722370 2563543 round_trippers.go:469] Request Headers:
	I1002 12:02:54.722383 2563543 round_trippers.go:473]     Accept: application/json, */*
	I1002 12:02:54.722390 2563543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 12:02:54.725267 2563543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 12:02:54.725290 2563543 round_trippers.go:577] Response Headers:
	I1002 12:02:54.725299 2563543 round_trippers.go:580]     Audit-Id: 7cc4166a-6787-4408-b20c-4cca30ba4885
	I1002 12:02:54.725305 2563543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 12:02:54.725311 2563543 round_trippers.go:580]     Content-Type: application/json
	I1002 12:02:54.725317 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f46ce94e-ab39-47cd-9413-72b4b2d8d98a
	I1002 12:02:54.725323 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f0dd078-9de5-43de-a432-b38b9d4a2a6d
	I1002 12:02:54.725330 2563543 round_trippers.go:580]     Date: Mon, 02 Oct 2023 12:02:54 GMT
	I1002 12:02:54.725445 2563543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-361100","uid":"addea54c-14e4-4213-b46d-af4c62d4c4db","resourceVersion":"314","creationTimestamp":"2023-10-02T12:02:09Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-361100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-361100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T12_02_13_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T12:02:09Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1002 12:02:55.222790 2563543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-361100
	I1002 12:02:55.222814 2563543 round_trippers.go:469] Request Headers:
	I1002 12:02:55.222824 2563543 round_trippers.go:473]     Accept: application/json, */*
	I1002 12:02:55.222831 2563543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 12:02:55.225374 2563543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 12:02:55.225403 2563543 round_trippers.go:577] Response Headers:
	I1002 12:02:55.225412 2563543 round_trippers.go:580]     Date: Mon, 02 Oct 2023 12:02:55 GMT
	I1002 12:02:55.225421 2563543 round_trippers.go:580]     Audit-Id: 7aec2b1c-ab98-446f-a5bb-6531abc0cb47
	I1002 12:02:55.225427 2563543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 12:02:55.225433 2563543 round_trippers.go:580]     Content-Type: application/json
	I1002 12:02:55.225442 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f46ce94e-ab39-47cd-9413-72b4b2d8d98a
	I1002 12:02:55.225449 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f0dd078-9de5-43de-a432-b38b9d4a2a6d
	I1002 12:02:55.225784 2563543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-361100","uid":"addea54c-14e4-4213-b46d-af4c62d4c4db","resourceVersion":"314","creationTimestamp":"2023-10-02T12:02:09Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-361100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-361100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T12_02_13_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T12:02:09Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1002 12:02:55.722680 2563543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-361100
	I1002 12:02:55.722707 2563543 round_trippers.go:469] Request Headers:
	I1002 12:02:55.722718 2563543 round_trippers.go:473]     Accept: application/json, */*
	I1002 12:02:55.722725 2563543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 12:02:55.725212 2563543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 12:02:55.725234 2563543 round_trippers.go:577] Response Headers:
	I1002 12:02:55.725242 2563543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 12:02:55.725248 2563543 round_trippers.go:580]     Content-Type: application/json
	I1002 12:02:55.725255 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f46ce94e-ab39-47cd-9413-72b4b2d8d98a
	I1002 12:02:55.725261 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f0dd078-9de5-43de-a432-b38b9d4a2a6d
	I1002 12:02:55.725267 2563543 round_trippers.go:580]     Date: Mon, 02 Oct 2023 12:02:55 GMT
	I1002 12:02:55.725273 2563543 round_trippers.go:580]     Audit-Id: be13f630-c25a-4c07-a62c-bb76c343f0f2
	I1002 12:02:55.725458 2563543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-361100","uid":"addea54c-14e4-4213-b46d-af4c62d4c4db","resourceVersion":"314","creationTimestamp":"2023-10-02T12:02:09Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-361100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-361100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T12_02_13_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T12:02:09Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1002 12:02:56.222119 2563543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-361100
	I1002 12:02:56.222144 2563543 round_trippers.go:469] Request Headers:
	I1002 12:02:56.222154 2563543 round_trippers.go:473]     Accept: application/json, */*
	I1002 12:02:56.222161 2563543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 12:02:56.224925 2563543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 12:02:56.224951 2563543 round_trippers.go:577] Response Headers:
	I1002 12:02:56.224960 2563543 round_trippers.go:580]     Content-Type: application/json
	I1002 12:02:56.224967 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f46ce94e-ab39-47cd-9413-72b4b2d8d98a
	I1002 12:02:56.224973 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f0dd078-9de5-43de-a432-b38b9d4a2a6d
	I1002 12:02:56.224979 2563543 round_trippers.go:580]     Date: Mon, 02 Oct 2023 12:02:56 GMT
	I1002 12:02:56.224985 2563543 round_trippers.go:580]     Audit-Id: cd69ef27-7f2b-4392-b660-c9e78ba65f34
	I1002 12:02:56.224992 2563543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 12:02:56.225101 2563543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-361100","uid":"addea54c-14e4-4213-b46d-af4c62d4c4db","resourceVersion":"314","creationTimestamp":"2023-10-02T12:02:09Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-361100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-361100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T12_02_13_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T12:02:09Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1002 12:02:56.722117 2563543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-361100
	I1002 12:02:56.722142 2563543 round_trippers.go:469] Request Headers:
	I1002 12:02:56.722152 2563543 round_trippers.go:473]     Accept: application/json, */*
	I1002 12:02:56.722159 2563543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 12:02:56.724691 2563543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 12:02:56.724721 2563543 round_trippers.go:577] Response Headers:
	I1002 12:02:56.724731 2563543 round_trippers.go:580]     Date: Mon, 02 Oct 2023 12:02:56 GMT
	I1002 12:02:56.724738 2563543 round_trippers.go:580]     Audit-Id: 3f87ab17-702e-4288-bbae-0a92aeada2f8
	I1002 12:02:56.724744 2563543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 12:02:56.724751 2563543 round_trippers.go:580]     Content-Type: application/json
	I1002 12:02:56.724757 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f46ce94e-ab39-47cd-9413-72b4b2d8d98a
	I1002 12:02:56.724772 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f0dd078-9de5-43de-a432-b38b9d4a2a6d
	I1002 12:02:56.725171 2563543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-361100","uid":"addea54c-14e4-4213-b46d-af4c62d4c4db","resourceVersion":"314","creationTimestamp":"2023-10-02T12:02:09Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-361100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-361100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T12_02_13_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T12:02:09Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1002 12:02:56.725571 2563543 node_ready.go:58] node "multinode-361100" has status "Ready":"False"
	I1002 12:02:57.223084 2563543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-361100
	I1002 12:02:57.223111 2563543 round_trippers.go:469] Request Headers:
	I1002 12:02:57.223121 2563543 round_trippers.go:473]     Accept: application/json, */*
	I1002 12:02:57.223128 2563543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 12:02:57.225983 2563543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 12:02:57.226007 2563543 round_trippers.go:577] Response Headers:
	I1002 12:02:57.226015 2563543 round_trippers.go:580]     Audit-Id: f83b26ca-a335-478b-a82a-53720b0fb4a4
	I1002 12:02:57.226022 2563543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 12:02:57.226028 2563543 round_trippers.go:580]     Content-Type: application/json
	I1002 12:02:57.226035 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f46ce94e-ab39-47cd-9413-72b4b2d8d98a
	I1002 12:02:57.226041 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f0dd078-9de5-43de-a432-b38b9d4a2a6d
	I1002 12:02:57.226047 2563543 round_trippers.go:580]     Date: Mon, 02 Oct 2023 12:02:57 GMT
	I1002 12:02:57.226216 2563543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-361100","uid":"addea54c-14e4-4213-b46d-af4c62d4c4db","resourceVersion":"402","creationTimestamp":"2023-10-02T12:02:09Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-361100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-361100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T12_02_13_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T12:02:09Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1002 12:02:57.226674 2563543 node_ready.go:49] node "multinode-361100" has status "Ready":"True"
	I1002 12:02:57.226694 2563543 node_ready.go:38] duration metric: took 31.591496535s waiting for node "multinode-361100" to be "Ready" ...
	I1002 12:02:57.226706 2563543 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 12:02:57.226778 2563543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1002 12:02:57.226790 2563543 round_trippers.go:469] Request Headers:
	I1002 12:02:57.226800 2563543 round_trippers.go:473]     Accept: application/json, */*
	I1002 12:02:57.226810 2563543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 12:02:57.230715 2563543 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1002 12:02:57.230747 2563543 round_trippers.go:577] Response Headers:
	I1002 12:02:57.230756 2563543 round_trippers.go:580]     Audit-Id: 8ded8142-ffd2-4fea-9d81-abff41f08c34
	I1002 12:02:57.230762 2563543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 12:02:57.230769 2563543 round_trippers.go:580]     Content-Type: application/json
	I1002 12:02:57.230775 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f46ce94e-ab39-47cd-9413-72b4b2d8d98a
	I1002 12:02:57.230781 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f0dd078-9de5-43de-a432-b38b9d4a2a6d
	I1002 12:02:57.230787 2563543 round_trippers.go:580]     Date: Mon, 02 Oct 2023 12:02:57 GMT
	I1002 12:02:57.231179 2563543 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"408"},"items":[{"metadata":{"name":"coredns-5dd5756b68-t8gwn","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"43f1fa61-5afd-4b63-abf4-f27325b4e897","resourceVersion":"407","creationTimestamp":"2023-10-02T12:02:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c6088028-e1c7-4914-b9bd-030d03ef63a9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T12:02:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c6088028-e1c7-4914-b9bd-030d03ef63a9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55534 chars]
	I1002 12:02:57.235167 2563543 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-t8gwn" in "kube-system" namespace to be "Ready" ...
	I1002 12:02:57.235263 2563543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-t8gwn
	I1002 12:02:57.235278 2563543 round_trippers.go:469] Request Headers:
	I1002 12:02:57.235287 2563543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 12:02:57.235294 2563543 round_trippers.go:473]     Accept: application/json, */*
	I1002 12:02:57.238227 2563543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 12:02:57.238251 2563543 round_trippers.go:577] Response Headers:
	I1002 12:02:57.238259 2563543 round_trippers.go:580]     Audit-Id: ec9d9ed0-c2f2-4bba-b452-aabb170e854e
	I1002 12:02:57.238266 2563543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 12:02:57.238273 2563543 round_trippers.go:580]     Content-Type: application/json
	I1002 12:02:57.238279 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f46ce94e-ab39-47cd-9413-72b4b2d8d98a
	I1002 12:02:57.238285 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f0dd078-9de5-43de-a432-b38b9d4a2a6d
	I1002 12:02:57.238292 2563543 round_trippers.go:580]     Date: Mon, 02 Oct 2023 12:02:57 GMT
	I1002 12:02:57.238530 2563543 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-t8gwn","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"43f1fa61-5afd-4b63-abf4-f27325b4e897","resourceVersion":"407","creationTimestamp":"2023-10-02T12:02:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c6088028-e1c7-4914-b9bd-030d03ef63a9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T12:02:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c6088028-e1c7-4914-b9bd-030d03ef63a9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I1002 12:02:57.239183 2563543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-361100
	I1002 12:02:57.239204 2563543 round_trippers.go:469] Request Headers:
	I1002 12:02:57.239214 2563543 round_trippers.go:473]     Accept: application/json, */*
	I1002 12:02:57.239222 2563543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 12:02:57.241850 2563543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 12:02:57.241882 2563543 round_trippers.go:577] Response Headers:
	I1002 12:02:57.241891 2563543 round_trippers.go:580]     Audit-Id: bf1845fb-060c-487b-8883-638fe03a03b8
	I1002 12:02:57.241898 2563543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 12:02:57.241905 2563543 round_trippers.go:580]     Content-Type: application/json
	I1002 12:02:57.241912 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f46ce94e-ab39-47cd-9413-72b4b2d8d98a
	I1002 12:02:57.241919 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f0dd078-9de5-43de-a432-b38b9d4a2a6d
	I1002 12:02:57.241925 2563543 round_trippers.go:580]     Date: Mon, 02 Oct 2023 12:02:57 GMT
	I1002 12:02:57.242331 2563543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-361100","uid":"addea54c-14e4-4213-b46d-af4c62d4c4db","resourceVersion":"402","creationTimestamp":"2023-10-02T12:02:09Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-361100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-361100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T12_02_13_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T12:02:09Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1002 12:02:57.242797 2563543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-t8gwn
	I1002 12:02:57.242814 2563543 round_trippers.go:469] Request Headers:
	I1002 12:02:57.242916 2563543 round_trippers.go:473]     Accept: application/json, */*
	I1002 12:02:57.242932 2563543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 12:02:57.245572 2563543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 12:02:57.245631 2563543 round_trippers.go:577] Response Headers:
	I1002 12:02:57.245652 2563543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 12:02:57.245676 2563543 round_trippers.go:580]     Content-Type: application/json
	I1002 12:02:57.245713 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f46ce94e-ab39-47cd-9413-72b4b2d8d98a
	I1002 12:02:57.245740 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f0dd078-9de5-43de-a432-b38b9d4a2a6d
	I1002 12:02:57.245762 2563543 round_trippers.go:580]     Date: Mon, 02 Oct 2023 12:02:57 GMT
	I1002 12:02:57.245784 2563543 round_trippers.go:580]     Audit-Id: 23f5bc1b-3588-4cd1-81c5-0c8803582cbd
	I1002 12:02:57.245993 2563543 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-t8gwn","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"43f1fa61-5afd-4b63-abf4-f27325b4e897","resourceVersion":"407","creationTimestamp":"2023-10-02T12:02:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c6088028-e1c7-4914-b9bd-030d03ef63a9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T12:02:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c6088028-e1c7-4914-b9bd-030d03ef63a9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I1002 12:02:57.246669 2563543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-361100
	I1002 12:02:57.246691 2563543 round_trippers.go:469] Request Headers:
	I1002 12:02:57.246701 2563543 round_trippers.go:473]     Accept: application/json, */*
	I1002 12:02:57.246709 2563543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 12:02:57.249401 2563543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 12:02:57.249473 2563543 round_trippers.go:577] Response Headers:
	I1002 12:02:57.249496 2563543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 12:02:57.249509 2563543 round_trippers.go:580]     Content-Type: application/json
	I1002 12:02:57.249516 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f46ce94e-ab39-47cd-9413-72b4b2d8d98a
	I1002 12:02:57.249522 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f0dd078-9de5-43de-a432-b38b9d4a2a6d
	I1002 12:02:57.249542 2563543 round_trippers.go:580]     Date: Mon, 02 Oct 2023 12:02:57 GMT
	I1002 12:02:57.249554 2563543 round_trippers.go:580]     Audit-Id: 2662e34a-df2c-462d-bd08-8f9330cac4e3
	I1002 12:02:57.249708 2563543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-361100","uid":"addea54c-14e4-4213-b46d-af4c62d4c4db","resourceVersion":"402","creationTimestamp":"2023-10-02T12:02:09Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-361100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-361100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T12_02_13_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T12:02:09Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1002 12:02:57.750940 2563543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-t8gwn
	I1002 12:02:57.750979 2563543 round_trippers.go:469] Request Headers:
	I1002 12:02:57.750993 2563543 round_trippers.go:473]     Accept: application/json, */*
	I1002 12:02:57.751003 2563543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 12:02:57.754342 2563543 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1002 12:02:57.754382 2563543 round_trippers.go:577] Response Headers:
	I1002 12:02:57.754392 2563543 round_trippers.go:580]     Date: Mon, 02 Oct 2023 12:02:57 GMT
	I1002 12:02:57.754399 2563543 round_trippers.go:580]     Audit-Id: 3cd2d304-d828-41a4-baec-725e5bedc09e
	I1002 12:02:57.754406 2563543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 12:02:57.754413 2563543 round_trippers.go:580]     Content-Type: application/json
	I1002 12:02:57.754419 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f46ce94e-ab39-47cd-9413-72b4b2d8d98a
	I1002 12:02:57.754429 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f0dd078-9de5-43de-a432-b38b9d4a2a6d
	I1002 12:02:57.754730 2563543 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-t8gwn","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"43f1fa61-5afd-4b63-abf4-f27325b4e897","resourceVersion":"407","creationTimestamp":"2023-10-02T12:02:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c6088028-e1c7-4914-b9bd-030d03ef63a9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T12:02:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c6088028-e1c7-4914-b9bd-030d03ef63a9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I1002 12:02:57.755253 2563543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-361100
	I1002 12:02:57.755269 2563543 round_trippers.go:469] Request Headers:
	I1002 12:02:57.755278 2563543 round_trippers.go:473]     Accept: application/json, */*
	I1002 12:02:57.755285 2563543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 12:02:57.757864 2563543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 12:02:57.757928 2563543 round_trippers.go:577] Response Headers:
	I1002 12:02:57.757951 2563543 round_trippers.go:580]     Audit-Id: f5fa22e2-62d7-4a00-b799-295232e54a9a
	I1002 12:02:57.757973 2563543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 12:02:57.758012 2563543 round_trippers.go:580]     Content-Type: application/json
	I1002 12:02:57.758038 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f46ce94e-ab39-47cd-9413-72b4b2d8d98a
	I1002 12:02:57.758060 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f0dd078-9de5-43de-a432-b38b9d4a2a6d
	I1002 12:02:57.758097 2563543 round_trippers.go:580]     Date: Mon, 02 Oct 2023 12:02:57 GMT
	I1002 12:02:57.758304 2563543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-361100","uid":"addea54c-14e4-4213-b46d-af4c62d4c4db","resourceVersion":"402","creationTimestamp":"2023-10-02T12:02:09Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-361100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-361100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T12_02_13_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T12:02:09Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1002 12:02:58.250461 2563543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-t8gwn
	I1002 12:02:58.250488 2563543 round_trippers.go:469] Request Headers:
	I1002 12:02:58.250498 2563543 round_trippers.go:473]     Accept: application/json, */*
	I1002 12:02:58.250505 2563543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 12:02:58.253166 2563543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 12:02:58.253190 2563543 round_trippers.go:577] Response Headers:
	I1002 12:02:58.253199 2563543 round_trippers.go:580]     Audit-Id: 2ac913d6-7de0-466f-83f9-c1c192119c7c
	I1002 12:02:58.253205 2563543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 12:02:58.253212 2563543 round_trippers.go:580]     Content-Type: application/json
	I1002 12:02:58.253218 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f46ce94e-ab39-47cd-9413-72b4b2d8d98a
	I1002 12:02:58.253224 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f0dd078-9de5-43de-a432-b38b9d4a2a6d
	I1002 12:02:58.253230 2563543 round_trippers.go:580]     Date: Mon, 02 Oct 2023 12:02:58 GMT
	I1002 12:02:58.253438 2563543 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-t8gwn","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"43f1fa61-5afd-4b63-abf4-f27325b4e897","resourceVersion":"418","creationTimestamp":"2023-10-02T12:02:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c6088028-e1c7-4914-b9bd-030d03ef63a9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T12:02:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c6088028-e1c7-4914-b9bd-030d03ef63a9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6263 chars]
	I1002 12:02:58.254062 2563543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-361100
	I1002 12:02:58.254078 2563543 round_trippers.go:469] Request Headers:
	I1002 12:02:58.254095 2563543 round_trippers.go:473]     Accept: application/json, */*
	I1002 12:02:58.254102 2563543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 12:02:58.262899 2563543 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1002 12:02:58.262922 2563543 round_trippers.go:577] Response Headers:
	I1002 12:02:58.262931 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f46ce94e-ab39-47cd-9413-72b4b2d8d98a
	I1002 12:02:58.262937 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f0dd078-9de5-43de-a432-b38b9d4a2a6d
	I1002 12:02:58.262944 2563543 round_trippers.go:580]     Date: Mon, 02 Oct 2023 12:02:58 GMT
	I1002 12:02:58.262950 2563543 round_trippers.go:580]     Audit-Id: 749b50df-d1e2-4bff-984d-4b82d7ab1fbc
	I1002 12:02:58.262956 2563543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 12:02:58.262962 2563543 round_trippers.go:580]     Content-Type: application/json
	I1002 12:02:58.263101 2563543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-361100","uid":"addea54c-14e4-4213-b46d-af4c62d4c4db","resourceVersion":"402","creationTimestamp":"2023-10-02T12:02:09Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-361100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-361100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T12_02_13_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T12:02:09Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1002 12:02:58.263519 2563543 pod_ready.go:92] pod "coredns-5dd5756b68-t8gwn" in "kube-system" namespace has status "Ready":"True"
	I1002 12:02:58.263539 2563543 pod_ready.go:81] duration metric: took 1.028340959s waiting for pod "coredns-5dd5756b68-t8gwn" in "kube-system" namespace to be "Ready" ...
	I1002 12:02:58.263551 2563543 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-361100" in "kube-system" namespace to be "Ready" ...
	I1002 12:02:58.263641 2563543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-361100
	I1002 12:02:58.263653 2563543 round_trippers.go:469] Request Headers:
	I1002 12:02:58.263661 2563543 round_trippers.go:473]     Accept: application/json, */*
	I1002 12:02:58.263678 2563543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 12:02:58.266808 2563543 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1002 12:02:58.266876 2563543 round_trippers.go:577] Response Headers:
	I1002 12:02:58.266899 2563543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 12:02:58.266922 2563543 round_trippers.go:580]     Content-Type: application/json
	I1002 12:02:58.266947 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f46ce94e-ab39-47cd-9413-72b4b2d8d98a
	I1002 12:02:58.266954 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f0dd078-9de5-43de-a432-b38b9d4a2a6d
	I1002 12:02:58.266960 2563543 round_trippers.go:580]     Date: Mon, 02 Oct 2023 12:02:58 GMT
	I1002 12:02:58.266967 2563543 round_trippers.go:580]     Audit-Id: bbcf6687-5265-45ce-be5e-518c81d0eb3c
	I1002 12:02:58.267097 2563543 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-361100","namespace":"kube-system","uid":"1591bfe3-2d80-4139-90be-0848d69c2065","resourceVersion":"390","creationTimestamp":"2023-10-02T12:02:13Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"f946def84b54153bbd407a93c6520aa6","kubernetes.io/config.mirror":"f946def84b54153bbd407a93c6520aa6","kubernetes.io/config.seen":"2023-10-02T12:02:12.749628660Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-361100","uid":"addea54c-14e4-4213-b46d-af4c62d4c4db","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T12:02:13Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5833 chars]
	I1002 12:02:58.267562 2563543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-361100
	I1002 12:02:58.267578 2563543 round_trippers.go:469] Request Headers:
	I1002 12:02:58.267587 2563543 round_trippers.go:473]     Accept: application/json, */*
	I1002 12:02:58.267594 2563543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 12:02:58.270056 2563543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 12:02:58.270090 2563543 round_trippers.go:577] Response Headers:
	I1002 12:02:58.270100 2563543 round_trippers.go:580]     Audit-Id: 3622e06e-c0c8-471d-9b13-ff15548b8639
	I1002 12:02:58.270106 2563543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 12:02:58.270117 2563543 round_trippers.go:580]     Content-Type: application/json
	I1002 12:02:58.270123 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f46ce94e-ab39-47cd-9413-72b4b2d8d98a
	I1002 12:02:58.270130 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f0dd078-9de5-43de-a432-b38b9d4a2a6d
	I1002 12:02:58.270136 2563543 round_trippers.go:580]     Date: Mon, 02 Oct 2023 12:02:58 GMT
	I1002 12:02:58.270293 2563543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-361100","uid":"addea54c-14e4-4213-b46d-af4c62d4c4db","resourceVersion":"402","creationTimestamp":"2023-10-02T12:02:09Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-361100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-361100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T12_02_13_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T12:02:09Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1002 12:02:58.270694 2563543 pod_ready.go:92] pod "etcd-multinode-361100" in "kube-system" namespace has status "Ready":"True"
	I1002 12:02:58.270711 2563543 pod_ready.go:81] duration metric: took 7.147839ms waiting for pod "etcd-multinode-361100" in "kube-system" namespace to be "Ready" ...
	I1002 12:02:58.270726 2563543 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-361100" in "kube-system" namespace to be "Ready" ...
	I1002 12:02:58.270791 2563543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-361100
	I1002 12:02:58.270802 2563543 round_trippers.go:469] Request Headers:
	I1002 12:02:58.270813 2563543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 12:02:58.270821 2563543 round_trippers.go:473]     Accept: application/json, */*
	I1002 12:02:58.273367 2563543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 12:02:58.273435 2563543 round_trippers.go:577] Response Headers:
	I1002 12:02:58.273449 2563543 round_trippers.go:580]     Date: Mon, 02 Oct 2023 12:02:58 GMT
	I1002 12:02:58.273456 2563543 round_trippers.go:580]     Audit-Id: 54e852a4-8eff-4ab3-88b6-0eec8a0e2578
	I1002 12:02:58.273463 2563543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 12:02:58.273469 2563543 round_trippers.go:580]     Content-Type: application/json
	I1002 12:02:58.273476 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f46ce94e-ab39-47cd-9413-72b4b2d8d98a
	I1002 12:02:58.273484 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f0dd078-9de5-43de-a432-b38b9d4a2a6d
	I1002 12:02:58.273679 2563543 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-361100","namespace":"kube-system","uid":"2e175694-616d-4084-8747-9c93a50196fe","resourceVersion":"391","creationTimestamp":"2023-10-02T12:02:13Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"e8c5df593d111e0d872f6b8579c917b6","kubernetes.io/config.mirror":"e8c5df593d111e0d872f6b8579c917b6","kubernetes.io/config.seen":"2023-10-02T12:02:12.749634961Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-361100","uid":"addea54c-14e4-4213-b46d-af4c62d4c4db","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T12:02:13Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8219 chars]
	I1002 12:02:58.274246 2563543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-361100
	I1002 12:02:58.274260 2563543 round_trippers.go:469] Request Headers:
	I1002 12:02:58.274271 2563543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 12:02:58.274284 2563543 round_trippers.go:473]     Accept: application/json, */*
	I1002 12:02:58.276673 2563543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 12:02:58.276723 2563543 round_trippers.go:577] Response Headers:
	I1002 12:02:58.276734 2563543 round_trippers.go:580]     Audit-Id: c6c854ed-b836-4a4c-8382-646fdbc8efd4
	I1002 12:02:58.276743 2563543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 12:02:58.276752 2563543 round_trippers.go:580]     Content-Type: application/json
	I1002 12:02:58.276758 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f46ce94e-ab39-47cd-9413-72b4b2d8d98a
	I1002 12:02:58.276767 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f0dd078-9de5-43de-a432-b38b9d4a2a6d
	I1002 12:02:58.276778 2563543 round_trippers.go:580]     Date: Mon, 02 Oct 2023 12:02:58 GMT
	I1002 12:02:58.277114 2563543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-361100","uid":"addea54c-14e4-4213-b46d-af4c62d4c4db","resourceVersion":"402","creationTimestamp":"2023-10-02T12:02:09Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-361100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-361100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T12_02_13_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T12:02:09Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1002 12:02:58.277517 2563543 pod_ready.go:92] pod "kube-apiserver-multinode-361100" in "kube-system" namespace has status "Ready":"True"
	I1002 12:02:58.277533 2563543 pod_ready.go:81] duration metric: took 6.798719ms waiting for pod "kube-apiserver-multinode-361100" in "kube-system" namespace to be "Ready" ...
	I1002 12:02:58.277545 2563543 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-361100" in "kube-system" namespace to be "Ready" ...
	I1002 12:02:58.277645 2563543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-361100
	I1002 12:02:58.277657 2563543 round_trippers.go:469] Request Headers:
	I1002 12:02:58.277666 2563543 round_trippers.go:473]     Accept: application/json, */*
	I1002 12:02:58.277673 2563543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 12:02:58.280175 2563543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 12:02:58.280242 2563543 round_trippers.go:577] Response Headers:
	I1002 12:02:58.280267 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f0dd078-9de5-43de-a432-b38b9d4a2a6d
	I1002 12:02:58.280289 2563543 round_trippers.go:580]     Date: Mon, 02 Oct 2023 12:02:58 GMT
	I1002 12:02:58.280326 2563543 round_trippers.go:580]     Audit-Id: 70959dc1-67af-490d-b774-38c4a2b4b49b
	I1002 12:02:58.280352 2563543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 12:02:58.280374 2563543 round_trippers.go:580]     Content-Type: application/json
	I1002 12:02:58.280412 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f46ce94e-ab39-47cd-9413-72b4b2d8d98a
	I1002 12:02:58.280728 2563543 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-361100","namespace":"kube-system","uid":"6333c350-8aae-41f5-b761-b1c0e8bb58c8","resourceVersion":"392","creationTimestamp":"2023-10-02T12:02:13Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"033d17b0dc058901bd6cd65357fc9f2b","kubernetes.io/config.mirror":"033d17b0dc058901bd6cd65357fc9f2b","kubernetes.io/config.seen":"2023-10-02T12:02:12.749636520Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-361100","uid":"addea54c-14e4-4213-b46d-af4c62d4c4db","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T12:02:13Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7794 chars]
	I1002 12:02:58.423601 2563543 request.go:629] Waited for 142.303257ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-361100
	I1002 12:02:58.423730 2563543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-361100
	I1002 12:02:58.423740 2563543 round_trippers.go:469] Request Headers:
	I1002 12:02:58.423749 2563543 round_trippers.go:473]     Accept: application/json, */*
	I1002 12:02:58.423757 2563543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 12:02:58.426376 2563543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 12:02:58.426402 2563543 round_trippers.go:577] Response Headers:
	I1002 12:02:58.426410 2563543 round_trippers.go:580]     Date: Mon, 02 Oct 2023 12:02:58 GMT
	I1002 12:02:58.426417 2563543 round_trippers.go:580]     Audit-Id: 4d0e2d04-d050-4cc8-9114-0e193d7a9518
	I1002 12:02:58.426424 2563543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 12:02:58.426430 2563543 round_trippers.go:580]     Content-Type: application/json
	I1002 12:02:58.426437 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f46ce94e-ab39-47cd-9413-72b4b2d8d98a
	I1002 12:02:58.426446 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f0dd078-9de5-43de-a432-b38b9d4a2a6d
	I1002 12:02:58.426813 2563543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-361100","uid":"addea54c-14e4-4213-b46d-af4c62d4c4db","resourceVersion":"402","creationTimestamp":"2023-10-02T12:02:09Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-361100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-361100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T12_02_13_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T12:02:09Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1002 12:02:58.427214 2563543 pod_ready.go:92] pod "kube-controller-manager-multinode-361100" in "kube-system" namespace has status "Ready":"True"
	I1002 12:02:58.427233 2563543 pod_ready.go:81] duration metric: took 149.679552ms waiting for pod "kube-controller-manager-multinode-361100" in "kube-system" namespace to be "Ready" ...
	I1002 12:02:58.427248 2563543 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6gfcj" in "kube-system" namespace to be "Ready" ...
	I1002 12:02:58.623711 2563543 request.go:629] Waited for 196.370681ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6gfcj
	I1002 12:02:58.623773 2563543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6gfcj
	I1002 12:02:58.623780 2563543 round_trippers.go:469] Request Headers:
	I1002 12:02:58.623795 2563543 round_trippers.go:473]     Accept: application/json, */*
	I1002 12:02:58.623803 2563543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 12:02:58.626711 2563543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 12:02:58.626826 2563543 round_trippers.go:577] Response Headers:
	I1002 12:02:58.626853 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f0dd078-9de5-43de-a432-b38b9d4a2a6d
	I1002 12:02:58.626891 2563543 round_trippers.go:580]     Date: Mon, 02 Oct 2023 12:02:58 GMT
	I1002 12:02:58.626916 2563543 round_trippers.go:580]     Audit-Id: 333689bf-a93a-41b0-ad8d-c18dbad3530c
	I1002 12:02:58.626936 2563543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 12:02:58.626972 2563543 round_trippers.go:580]     Content-Type: application/json
	I1002 12:02:58.626995 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f46ce94e-ab39-47cd-9413-72b4b2d8d98a
	I1002 12:02:58.636061 2563543 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-6gfcj","generateName":"kube-proxy-","namespace":"kube-system","uid":"356394df-71a5-4114-99de-9d594ec624ca","resourceVersion":"383","creationTimestamp":"2023-10-02T12:02:25Z","labels":{"controller-revision-hash":"5cbdb8dcbd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"191a9eb9-84d3-454f-b72a-9b074e5abff1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T12:02:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"191a9eb9-84d3-454f-b72a-9b074e5abff1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5510 chars]
	I1002 12:02:58.823218 2563543 request.go:629] Waited for 186.453041ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-361100
	I1002 12:02:58.823284 2563543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-361100
	I1002 12:02:58.823295 2563543 round_trippers.go:469] Request Headers:
	I1002 12:02:58.823304 2563543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 12:02:58.823314 2563543 round_trippers.go:473]     Accept: application/json, */*
	I1002 12:02:58.826065 2563543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 12:02:58.826134 2563543 round_trippers.go:577] Response Headers:
	I1002 12:02:58.826156 2563543 round_trippers.go:580]     Audit-Id: 88427480-9180-407f-8af4-29dd592c6609
	I1002 12:02:58.826169 2563543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 12:02:58.826176 2563543 round_trippers.go:580]     Content-Type: application/json
	I1002 12:02:58.826182 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f46ce94e-ab39-47cd-9413-72b4b2d8d98a
	I1002 12:02:58.826189 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f0dd078-9de5-43de-a432-b38b9d4a2a6d
	I1002 12:02:58.826197 2563543 round_trippers.go:580]     Date: Mon, 02 Oct 2023 12:02:58 GMT
	I1002 12:02:58.826326 2563543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-361100","uid":"addea54c-14e4-4213-b46d-af4c62d4c4db","resourceVersion":"402","creationTimestamp":"2023-10-02T12:02:09Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-361100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-361100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T12_02_13_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T12:02:09Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1002 12:02:58.826753 2563543 pod_ready.go:92] pod "kube-proxy-6gfcj" in "kube-system" namespace has status "Ready":"True"
	I1002 12:02:58.826770 2563543 pod_ready.go:81] duration metric: took 399.512776ms waiting for pod "kube-proxy-6gfcj" in "kube-system" namespace to be "Ready" ...
	I1002 12:02:58.826782 2563543 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-361100" in "kube-system" namespace to be "Ready" ...
	I1002 12:02:59.023126 2563543 request.go:629] Waited for 196.27985ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-361100
	I1002 12:02:59.023235 2563543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-361100
	I1002 12:02:59.023276 2563543 round_trippers.go:469] Request Headers:
	I1002 12:02:59.023304 2563543 round_trippers.go:473]     Accept: application/json, */*
	I1002 12:02:59.023326 2563543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 12:02:59.026488 2563543 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1002 12:02:59.026514 2563543 round_trippers.go:577] Response Headers:
	I1002 12:02:59.026524 2563543 round_trippers.go:580]     Content-Type: application/json
	I1002 12:02:59.026531 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f46ce94e-ab39-47cd-9413-72b4b2d8d98a
	I1002 12:02:59.026537 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f0dd078-9de5-43de-a432-b38b9d4a2a6d
	I1002 12:02:59.026543 2563543 round_trippers.go:580]     Date: Mon, 02 Oct 2023 12:02:59 GMT
	I1002 12:02:59.026549 2563543 round_trippers.go:580]     Audit-Id: 16e1de03-37fa-4787-8e95-03dc67a06869
	I1002 12:02:59.026555 2563543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 12:02:59.026699 2563543 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-361100","namespace":"kube-system","uid":"bc8f5f7c-fd2d-4ec5-b3b9-ecd4abfb06f7","resourceVersion":"389","creationTimestamp":"2023-10-02T12:02:13Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"4b7655de39be7fe33bd9544841473644","kubernetes.io/config.mirror":"4b7655de39be7fe33bd9544841473644","kubernetes.io/config.seen":"2023-10-02T12:02:12.749659667Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-361100","uid":"addea54c-14e4-4213-b46d-af4c62d4c4db","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T12:02:13Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4676 chars]
	I1002 12:02:59.223509 2563543 request.go:629] Waited for 196.354993ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-361100
	I1002 12:02:59.223570 2563543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-361100
	I1002 12:02:59.223575 2563543 round_trippers.go:469] Request Headers:
	I1002 12:02:59.223585 2563543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 12:02:59.223596 2563543 round_trippers.go:473]     Accept: application/json, */*
	I1002 12:02:59.226401 2563543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 12:02:59.226465 2563543 round_trippers.go:577] Response Headers:
	I1002 12:02:59.226487 2563543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 12:02:59.226522 2563543 round_trippers.go:580]     Content-Type: application/json
	I1002 12:02:59.226551 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f46ce94e-ab39-47cd-9413-72b4b2d8d98a
	I1002 12:02:59.226589 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f0dd078-9de5-43de-a432-b38b9d4a2a6d
	I1002 12:02:59.226609 2563543 round_trippers.go:580]     Date: Mon, 02 Oct 2023 12:02:59 GMT
	I1002 12:02:59.226636 2563543 round_trippers.go:580]     Audit-Id: e28204ee-59e9-46b8-8f90-06898e5aad77
	I1002 12:02:59.226875 2563543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-361100","uid":"addea54c-14e4-4213-b46d-af4c62d4c4db","resourceVersion":"402","creationTimestamp":"2023-10-02T12:02:09Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-361100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-361100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T12_02_13_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T12:02:09Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1002 12:02:59.227314 2563543 pod_ready.go:92] pod "kube-scheduler-multinode-361100" in "kube-system" namespace has status "Ready":"True"
	I1002 12:02:59.227335 2563543 pod_ready.go:81] duration metric: took 400.545835ms waiting for pod "kube-scheduler-multinode-361100" in "kube-system" namespace to be "Ready" ...
	I1002 12:02:59.227349 2563543 pod_ready.go:38] duration metric: took 2.000626158s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 12:02:59.227385 2563543 api_server.go:52] waiting for apiserver process to appear ...
	I1002 12:02:59.227457 2563543 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 12:02:59.241751 2563543 command_runner.go:130] > 1229
	I1002 12:02:59.241787 2563543 api_server.go:72] duration metric: took 33.669693259s to wait for apiserver process to appear ...
	I1002 12:02:59.241832 2563543 api_server.go:88] waiting for apiserver healthz status ...
	I1002 12:02:59.241849 2563543 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1002 12:02:59.252872 2563543 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I1002 12:02:59.252950 2563543 round_trippers.go:463] GET https://192.168.58.2:8443/version
	I1002 12:02:59.252963 2563543 round_trippers.go:469] Request Headers:
	I1002 12:02:59.252973 2563543 round_trippers.go:473]     Accept: application/json, */*
	I1002 12:02:59.252984 2563543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 12:02:59.254670 2563543 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1002 12:02:59.254705 2563543 round_trippers.go:577] Response Headers:
	I1002 12:02:59.254714 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f46ce94e-ab39-47cd-9413-72b4b2d8d98a
	I1002 12:02:59.254721 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f0dd078-9de5-43de-a432-b38b9d4a2a6d
	I1002 12:02:59.254729 2563543 round_trippers.go:580]     Content-Length: 263
	I1002 12:02:59.254735 2563543 round_trippers.go:580]     Date: Mon, 02 Oct 2023 12:02:59 GMT
	I1002 12:02:59.254741 2563543 round_trippers.go:580]     Audit-Id: cde55e3a-0b4b-42d2-a92d-47f4cd023c20
	I1002 12:02:59.254747 2563543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 12:02:59.254759 2563543 round_trippers.go:580]     Content-Type: application/json
	I1002 12:02:59.254778 2563543 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.2",
	  "gitCommit": "89a4ea3e1e4ddd7f7572286090359983e0387b2f",
	  "gitTreeState": "clean",
	  "buildDate": "2023-09-13T09:29:07Z",
	  "goVersion": "go1.20.8",
	  "compiler": "gc",
	  "platform": "linux/arm64"
	}
	I1002 12:02:59.254869 2563543 api_server.go:141] control plane version: v1.28.2
	I1002 12:02:59.254891 2563543 api_server.go:131] duration metric: took 13.051986ms to wait for apiserver health ...
	I1002 12:02:59.254899 2563543 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 12:02:59.423174 2563543 request.go:629] Waited for 168.20704ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1002 12:02:59.423253 2563543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1002 12:02:59.423263 2563543 round_trippers.go:469] Request Headers:
	I1002 12:02:59.423272 2563543 round_trippers.go:473]     Accept: application/json, */*
	I1002 12:02:59.423279 2563543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 12:02:59.426813 2563543 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1002 12:02:59.426841 2563543 round_trippers.go:577] Response Headers:
	I1002 12:02:59.426851 2563543 round_trippers.go:580]     Date: Mon, 02 Oct 2023 12:02:59 GMT
	I1002 12:02:59.426858 2563543 round_trippers.go:580]     Audit-Id: ecce1635-8687-4fee-828f-7dd358bb0215
	I1002 12:02:59.426864 2563543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 12:02:59.426871 2563543 round_trippers.go:580]     Content-Type: application/json
	I1002 12:02:59.426900 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f46ce94e-ab39-47cd-9413-72b4b2d8d98a
	I1002 12:02:59.426910 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f0dd078-9de5-43de-a432-b38b9d4a2a6d
	I1002 12:02:59.427773 2563543 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"422"},"items":[{"metadata":{"name":"coredns-5dd5756b68-t8gwn","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"43f1fa61-5afd-4b63-abf4-f27325b4e897","resourceVersion":"418","creationTimestamp":"2023-10-02T12:02:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c6088028-e1c7-4914-b9bd-030d03ef63a9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T12:02:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c6088028-e1c7-4914-b9bd-030d03ef63a9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55612 chars]
	I1002 12:02:59.430279 2563543 system_pods.go:59] 8 kube-system pods found
	I1002 12:02:59.430316 2563543 system_pods.go:61] "coredns-5dd5756b68-t8gwn" [43f1fa61-5afd-4b63-abf4-f27325b4e897] Running
	I1002 12:02:59.430323 2563543 system_pods.go:61] "etcd-multinode-361100" [1591bfe3-2d80-4139-90be-0848d69c2065] Running
	I1002 12:02:59.430328 2563543 system_pods.go:61] "kindnet-2lbdw" [bc5f6602-13e3-4c6a-b3ce-4ca28a07bd37] Running
	I1002 12:02:59.430335 2563543 system_pods.go:61] "kube-apiserver-multinode-361100" [2e175694-616d-4084-8747-9c93a50196fe] Running
	I1002 12:02:59.430352 2563543 system_pods.go:61] "kube-controller-manager-multinode-361100" [6333c350-8aae-41f5-b761-b1c0e8bb58c8] Running
	I1002 12:02:59.430357 2563543 system_pods.go:61] "kube-proxy-6gfcj" [356394df-71a5-4114-99de-9d594ec624ca] Running
	I1002 12:02:59.430363 2563543 system_pods.go:61] "kube-scheduler-multinode-361100" [bc8f5f7c-fd2d-4ec5-b3b9-ecd4abfb06f7] Running
	I1002 12:02:59.430370 2563543 system_pods.go:61] "storage-provisioner" [5dde50ec-2225-41ca-adeb-ceff5d1717b9] Running
	I1002 12:02:59.430376 2563543 system_pods.go:74] duration metric: took 175.47291ms to wait for pod list to return data ...
	I1002 12:02:59.430388 2563543 default_sa.go:34] waiting for default service account to be created ...
	I1002 12:02:59.623833 2563543 request.go:629] Waited for 193.366329ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I1002 12:02:59.623919 2563543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I1002 12:02:59.623925 2563543 round_trippers.go:469] Request Headers:
	I1002 12:02:59.623934 2563543 round_trippers.go:473]     Accept: application/json, */*
	I1002 12:02:59.623941 2563543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 12:02:59.627753 2563543 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1002 12:02:59.627780 2563543 round_trippers.go:577] Response Headers:
	I1002 12:02:59.627793 2563543 round_trippers.go:580]     Audit-Id: 862f0192-4b82-4e49-af9c-54a17ea5f73d
	I1002 12:02:59.627800 2563543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 12:02:59.627807 2563543 round_trippers.go:580]     Content-Type: application/json
	I1002 12:02:59.627814 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f46ce94e-ab39-47cd-9413-72b4b2d8d98a
	I1002 12:02:59.627821 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f0dd078-9de5-43de-a432-b38b9d4a2a6d
	I1002 12:02:59.627827 2563543 round_trippers.go:580]     Content-Length: 261
	I1002 12:02:59.627833 2563543 round_trippers.go:580]     Date: Mon, 02 Oct 2023 12:02:59 GMT
	I1002 12:02:59.628052 2563543 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"423"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"c1f4b210-c5b2-487f-bcda-76321aeaec3f","resourceVersion":"302","creationTimestamp":"2023-10-02T12:02:24Z"}}]}
	I1002 12:02:59.628311 2563543 default_sa.go:45] found service account: "default"
	I1002 12:02:59.628328 2563543 default_sa.go:55] duration metric: took 197.934088ms for default service account to be created ...
	I1002 12:02:59.628337 2563543 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 12:02:59.823775 2563543 request.go:629] Waited for 195.370491ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1002 12:02:59.823897 2563543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1002 12:02:59.823910 2563543 round_trippers.go:469] Request Headers:
	I1002 12:02:59.823920 2563543 round_trippers.go:473]     Accept: application/json, */*
	I1002 12:02:59.823928 2563543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 12:02:59.827753 2563543 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1002 12:02:59.827829 2563543 round_trippers.go:577] Response Headers:
	I1002 12:02:59.827852 2563543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 12:02:59.827873 2563543 round_trippers.go:580]     Content-Type: application/json
	I1002 12:02:59.827907 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f46ce94e-ab39-47cd-9413-72b4b2d8d98a
	I1002 12:02:59.827938 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f0dd078-9de5-43de-a432-b38b9d4a2a6d
	I1002 12:02:59.827951 2563543 round_trippers.go:580]     Date: Mon, 02 Oct 2023 12:02:59 GMT
	I1002 12:02:59.827958 2563543 round_trippers.go:580]     Audit-Id: 385ba0c5-2aff-483e-9580-e4b997d338d3
	I1002 12:02:59.828442 2563543 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"423"},"items":[{"metadata":{"name":"coredns-5dd5756b68-t8gwn","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"43f1fa61-5afd-4b63-abf4-f27325b4e897","resourceVersion":"418","creationTimestamp":"2023-10-02T12:02:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c6088028-e1c7-4914-b9bd-030d03ef63a9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T12:02:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c6088028-e1c7-4914-b9bd-030d03ef63a9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55612 chars]
	I1002 12:02:59.830841 2563543 system_pods.go:86] 8 kube-system pods found
	I1002 12:02:59.830870 2563543 system_pods.go:89] "coredns-5dd5756b68-t8gwn" [43f1fa61-5afd-4b63-abf4-f27325b4e897] Running
	I1002 12:02:59.830877 2563543 system_pods.go:89] "etcd-multinode-361100" [1591bfe3-2d80-4139-90be-0848d69c2065] Running
	I1002 12:02:59.830883 2563543 system_pods.go:89] "kindnet-2lbdw" [bc5f6602-13e3-4c6a-b3ce-4ca28a07bd37] Running
	I1002 12:02:59.830888 2563543 system_pods.go:89] "kube-apiserver-multinode-361100" [2e175694-616d-4084-8747-9c93a50196fe] Running
	I1002 12:02:59.830895 2563543 system_pods.go:89] "kube-controller-manager-multinode-361100" [6333c350-8aae-41f5-b761-b1c0e8bb58c8] Running
	I1002 12:02:59.830900 2563543 system_pods.go:89] "kube-proxy-6gfcj" [356394df-71a5-4114-99de-9d594ec624ca] Running
	I1002 12:02:59.830905 2563543 system_pods.go:89] "kube-scheduler-multinode-361100" [bc8f5f7c-fd2d-4ec5-b3b9-ecd4abfb06f7] Running
	I1002 12:02:59.830910 2563543 system_pods.go:89] "storage-provisioner" [5dde50ec-2225-41ca-adeb-ceff5d1717b9] Running
	I1002 12:02:59.830917 2563543 system_pods.go:126] duration metric: took 202.574684ms to wait for k8s-apps to be running ...
	I1002 12:02:59.830929 2563543 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 12:02:59.830992 2563543 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 12:02:59.851239 2563543 system_svc.go:56] duration metric: took 20.300371ms WaitForService to wait for kubelet.
	I1002 12:02:59.851269 2563543 kubeadm.go:581] duration metric: took 34.279175475s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1002 12:02:59.851291 2563543 node_conditions.go:102] verifying NodePressure condition ...
	I1002 12:03:00.024156 2563543 request.go:629] Waited for 172.733945ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I1002 12:03:00.024280 2563543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I1002 12:03:00.024316 2563543 round_trippers.go:469] Request Headers:
	I1002 12:03:00.024343 2563543 round_trippers.go:473]     Accept: application/json, */*
	I1002 12:03:00.024364 2563543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 12:03:00.040483 2563543 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I1002 12:03:00.040513 2563543 round_trippers.go:577] Response Headers:
	I1002 12:03:00.040545 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f0dd078-9de5-43de-a432-b38b9d4a2a6d
	I1002 12:03:00.040552 2563543 round_trippers.go:580]     Date: Mon, 02 Oct 2023 12:03:00 GMT
	I1002 12:03:00.040559 2563543 round_trippers.go:580]     Audit-Id: 3d1ad029-2c9d-4296-bda7-4739ed8dcd92
	I1002 12:03:00.040565 2563543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 12:03:00.040572 2563543 round_trippers.go:580]     Content-Type: application/json
	I1002 12:03:00.040578 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f46ce94e-ab39-47cd-9413-72b4b2d8d98a
	I1002 12:03:00.040736 2563543 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"423"},"items":[{"metadata":{"name":"multinode-361100","uid":"addea54c-14e4-4213-b46d-af4c62d4c4db","resourceVersion":"402","creationTimestamp":"2023-10-02T12:02:09Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-361100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-361100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T12_02_13_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 6082 chars]
	I1002 12:03:00.041243 2563543 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1002 12:03:00.041264 2563543 node_conditions.go:123] node cpu capacity is 2
	I1002 12:03:00.041276 2563543 node_conditions.go:105] duration metric: took 189.970338ms to run NodePressure ...
	I1002 12:03:00.041289 2563543 start.go:228] waiting for startup goroutines ...
	I1002 12:03:00.041296 2563543 start.go:233] waiting for cluster config update ...
	I1002 12:03:00.041308 2563543 start.go:242] writing updated cluster config ...
	I1002 12:03:00.049335 2563543 out.go:177] 
	I1002 12:03:00.052264 2563543 config.go:182] Loaded profile config "multinode-361100": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1002 12:03:00.052451 2563543 profile.go:148] Saving config to /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/multinode-361100/config.json ...
	I1002 12:03:00.057050 2563543 out.go:177] * Starting worker node multinode-361100-m02 in cluster multinode-361100
	I1002 12:03:00.058879 2563543 cache.go:122] Beginning downloading kic base image for docker with crio
	I1002 12:03:00.062427 2563543 out.go:177] * Pulling base image ...
	I1002 12:03:00.071264 2563543 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1002 12:03:00.071323 2563543 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 in local docker daemon
	I1002 12:03:00.071365 2563543 cache.go:57] Caching tarball of preloaded images
	I1002 12:03:00.071519 2563543 preload.go:174] Found /home/jenkins/minikube-integration/17340-2494243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1002 12:03:00.071532 2563543 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on crio
	I1002 12:03:00.071666 2563543 profile.go:148] Saving config to /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/multinode-361100/config.json ...
	I1002 12:03:00.160021 2563543 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 in local docker daemon, skipping pull
	I1002 12:03:00.160048 2563543 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 exists in daemon, skipping load
	I1002 12:03:00.160069 2563543 cache.go:195] Successfully downloaded all kic artifacts
	I1002 12:03:00.160122 2563543 start.go:365] acquiring machines lock for multinode-361100-m02: {Name:mk743e214398d81dcca53d73578869b06d56fa7d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 12:03:00.160267 2563543 start.go:369] acquired machines lock for "multinode-361100-m02" in 123.57µs
	I1002 12:03:00.160297 2563543 start.go:93] Provisioning new machine with config: &{Name:multinode-361100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:multinode-361100 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mou
nt9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name:m02 IP: Port:0 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1002 12:03:00.160433 2563543 start.go:125] createHost starting for "m02" (driver="docker")
	I1002 12:03:00.176379 2563543 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1002 12:03:00.176574 2563543 start.go:159] libmachine.API.Create for "multinode-361100" (driver="docker")
	I1002 12:03:00.176610 2563543 client.go:168] LocalClient.Create starting
	I1002 12:03:00.176697 2563543 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17340-2494243/.minikube/certs/ca.pem
	I1002 12:03:00.176736 2563543 main.go:141] libmachine: Decoding PEM data...
	I1002 12:03:00.176752 2563543 main.go:141] libmachine: Parsing certificate...
	I1002 12:03:00.176811 2563543 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17340-2494243/.minikube/certs/cert.pem
	I1002 12:03:00.176828 2563543 main.go:141] libmachine: Decoding PEM data...
	I1002 12:03:00.176839 2563543 main.go:141] libmachine: Parsing certificate...
	I1002 12:03:00.177150 2563543 cli_runner.go:164] Run: docker network inspect multinode-361100 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 12:03:00.278993 2563543 network_create.go:76] Found existing network {name:multinode-361100 subnet:0x4001165170 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 58 1] mtu:1500}
	I1002 12:03:00.279037 2563543 kic.go:117] calculated static IP "192.168.58.3" for the "multinode-361100-m02" container
	I1002 12:03:00.279126 2563543 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1002 12:03:00.302200 2563543 cli_runner.go:164] Run: docker volume create multinode-361100-m02 --label name.minikube.sigs.k8s.io=multinode-361100-m02 --label created_by.minikube.sigs.k8s.io=true
	I1002 12:03:00.329340 2563543 oci.go:103] Successfully created a docker volume multinode-361100-m02
	I1002 12:03:00.329439 2563543 cli_runner.go:164] Run: docker run --rm --name multinode-361100-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-361100-m02 --entrypoint /usr/bin/test -v multinode-361100-m02:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 -d /var/lib
	I1002 12:03:00.958193 2563543 oci.go:107] Successfully prepared a docker volume multinode-361100-m02
	I1002 12:03:00.958247 2563543 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1002 12:03:00.958269 2563543 kic.go:190] Starting extracting preloaded images to volume ...
	I1002 12:03:00.958359 2563543 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17340-2494243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v multinode-361100-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 -I lz4 -xf /preloaded.tar -C /extractDir
	I1002 12:03:05.245374 2563543 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17340-2494243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v multinode-361100-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 -I lz4 -xf /preloaded.tar -C /extractDir: (4.286970732s)
	I1002 12:03:05.245411 2563543 kic.go:199] duration metric: took 4.287137 seconds to extract preloaded images to volume
	W1002 12:03:05.245572 2563543 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1002 12:03:05.245682 2563543 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1002 12:03:05.339679 2563543 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-361100-m02 --name multinode-361100-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-361100-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-361100-m02 --network multinode-361100 --ip 192.168.58.3 --volume multinode-361100-m02:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3
	I1002 12:03:05.719805 2563543 cli_runner.go:164] Run: docker container inspect multinode-361100-m02 --format={{.State.Running}}
	I1002 12:03:05.741037 2563543 cli_runner.go:164] Run: docker container inspect multinode-361100-m02 --format={{.State.Status}}
	I1002 12:03:05.771983 2563543 cli_runner.go:164] Run: docker exec multinode-361100-m02 stat /var/lib/dpkg/alternatives/iptables
	I1002 12:03:05.870935 2563543 oci.go:144] the created container "multinode-361100-m02" has a running status.
	I1002 12:03:05.870965 2563543 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/17340-2494243/.minikube/machines/multinode-361100-m02/id_rsa...
	I1002 12:03:06.700397 2563543 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2494243/.minikube/machines/multinode-361100-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1002 12:03:06.700489 2563543 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17340-2494243/.minikube/machines/multinode-361100-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1002 12:03:06.735192 2563543 cli_runner.go:164] Run: docker container inspect multinode-361100-m02 --format={{.State.Status}}
	I1002 12:03:06.766194 2563543 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1002 12:03:06.766214 2563543 kic_runner.go:114] Args: [docker exec --privileged multinode-361100-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1002 12:03:06.869057 2563543 cli_runner.go:164] Run: docker container inspect multinode-361100-m02 --format={{.State.Status}}
	I1002 12:03:06.914512 2563543 machine.go:88] provisioning docker machine ...
	I1002 12:03:06.914541 2563543 ubuntu.go:169] provisioning hostname "multinode-361100-m02"
	I1002 12:03:06.914608 2563543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-361100-m02
	I1002 12:03:06.947163 2563543 main.go:141] libmachine: Using SSH client type: native
	I1002 12:03:06.947657 2563543 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ac3c0] 0x3aeb30 <nil>  [] 0s} 127.0.0.1 35952 <nil> <nil>}
	I1002 12:03:06.947683 2563543 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-361100-m02 && echo "multinode-361100-m02" | sudo tee /etc/hostname
	I1002 12:03:07.132473 2563543 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-361100-m02
	
	I1002 12:03:07.132579 2563543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-361100-m02
	I1002 12:03:07.161032 2563543 main.go:141] libmachine: Using SSH client type: native
	I1002 12:03:07.161447 2563543 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ac3c0] 0x3aeb30 <nil>  [] 0s} 127.0.0.1 35952 <nil> <nil>}
	I1002 12:03:07.161485 2563543 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-361100-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-361100-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-361100-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 12:03:07.317139 2563543 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 12:03:07.317210 2563543 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17340-2494243/.minikube CaCertPath:/home/jenkins/minikube-integration/17340-2494243/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17340-2494243/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17340-2494243/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17340-2494243/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17340-2494243/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17340-2494243/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17340-2494243/.minikube}
	I1002 12:03:07.317245 2563543 ubuntu.go:177] setting up certificates
	I1002 12:03:07.317283 2563543 provision.go:83] configureAuth start
	I1002 12:03:07.317383 2563543 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-361100-m02
	I1002 12:03:07.342795 2563543 provision.go:138] copyHostCerts
	I1002 12:03:07.342837 2563543 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2494243/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17340-2494243/.minikube/ca.pem
	I1002 12:03:07.342871 2563543 exec_runner.go:144] found /home/jenkins/minikube-integration/17340-2494243/.minikube/ca.pem, removing ...
	I1002 12:03:07.342878 2563543 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17340-2494243/.minikube/ca.pem
	I1002 12:03:07.342959 2563543 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-2494243/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17340-2494243/.minikube/ca.pem (1082 bytes)
	I1002 12:03:07.343118 2563543 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2494243/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17340-2494243/.minikube/cert.pem
	I1002 12:03:07.343143 2563543 exec_runner.go:144] found /home/jenkins/minikube-integration/17340-2494243/.minikube/cert.pem, removing ...
	I1002 12:03:07.343148 2563543 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17340-2494243/.minikube/cert.pem
	I1002 12:03:07.343189 2563543 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-2494243/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17340-2494243/.minikube/cert.pem (1123 bytes)
	I1002 12:03:07.343241 2563543 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2494243/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17340-2494243/.minikube/key.pem
	I1002 12:03:07.343257 2563543 exec_runner.go:144] found /home/jenkins/minikube-integration/17340-2494243/.minikube/key.pem, removing ...
	I1002 12:03:07.343264 2563543 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17340-2494243/.minikube/key.pem
	I1002 12:03:07.343289 2563543 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-2494243/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17340-2494243/.minikube/key.pem (1675 bytes)
	I1002 12:03:07.343629 2563543 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17340-2494243/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17340-2494243/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17340-2494243/.minikube/certs/ca-key.pem org=jenkins.multinode-361100-m02 san=[192.168.58.3 127.0.0.1 localhost 127.0.0.1 minikube multinode-361100-m02]
	I1002 12:03:07.586364 2563543 provision.go:172] copyRemoteCerts
	I1002 12:03:07.586477 2563543 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 12:03:07.586537 2563543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-361100-m02
	I1002 12:03:07.606397 2563543 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35952 SSHKeyPath:/home/jenkins/minikube-integration/17340-2494243/.minikube/machines/multinode-361100-m02/id_rsa Username:docker}
	I1002 12:03:07.707937 2563543 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2494243/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1002 12:03:07.708002 2563543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2494243/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1002 12:03:07.738599 2563543 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2494243/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1002 12:03:07.738662 2563543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2494243/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 12:03:07.769498 2563543 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2494243/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1002 12:03:07.769564 2563543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2494243/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I1002 12:03:07.806343 2563543 provision.go:86] duration metric: configureAuth took 489.02348ms
	I1002 12:03:07.806377 2563543 ubuntu.go:193] setting minikube options for container-runtime
	I1002 12:03:07.806586 2563543 config.go:182] Loaded profile config "multinode-361100": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1002 12:03:07.806687 2563543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-361100-m02
	I1002 12:03:07.826678 2563543 main.go:141] libmachine: Using SSH client type: native
	I1002 12:03:07.827255 2563543 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ac3c0] 0x3aeb30 <nil>  [] 0s} 127.0.0.1 35952 <nil> <nil>}
	I1002 12:03:07.827276 2563543 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 12:03:08.103108 2563543 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 12:03:08.103179 2563543 machine.go:91] provisioned docker machine in 1.188646911s
	I1002 12:03:08.103204 2563543 client.go:171] LocalClient.Create took 7.926588419s
	I1002 12:03:08.103223 2563543 start.go:167] duration metric: libmachine.API.Create for "multinode-361100" took 7.926651492s
	I1002 12:03:08.103231 2563543 start.go:300] post-start starting for "multinode-361100-m02" (driver="docker")
	I1002 12:03:08.103241 2563543 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 12:03:08.103309 2563543 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 12:03:08.103355 2563543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-361100-m02
	I1002 12:03:08.123546 2563543 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35952 SSHKeyPath:/home/jenkins/minikube-integration/17340-2494243/.minikube/machines/multinode-361100-m02/id_rsa Username:docker}
	I1002 12:03:08.224312 2563543 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 12:03:08.229029 2563543 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.3 LTS"
	I1002 12:03:08.229054 2563543 command_runner.go:130] > NAME="Ubuntu"
	I1002 12:03:08.229062 2563543 command_runner.go:130] > VERSION_ID="22.04"
	I1002 12:03:08.229069 2563543 command_runner.go:130] > VERSION="22.04.3 LTS (Jammy Jellyfish)"
	I1002 12:03:08.229075 2563543 command_runner.go:130] > VERSION_CODENAME=jammy
	I1002 12:03:08.229080 2563543 command_runner.go:130] > ID=ubuntu
	I1002 12:03:08.229084 2563543 command_runner.go:130] > ID_LIKE=debian
	I1002 12:03:08.229090 2563543 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I1002 12:03:08.229096 2563543 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I1002 12:03:08.229104 2563543 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I1002 12:03:08.229113 2563543 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I1002 12:03:08.229120 2563543 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I1002 12:03:08.229164 2563543 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 12:03:08.229196 2563543 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1002 12:03:08.229208 2563543 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1002 12:03:08.229221 2563543 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1002 12:03:08.229233 2563543 filesync.go:126] Scanning /home/jenkins/minikube-integration/17340-2494243/.minikube/addons for local assets ...
	I1002 12:03:08.229302 2563543 filesync.go:126] Scanning /home/jenkins/minikube-integration/17340-2494243/.minikube/files for local assets ...
	I1002 12:03:08.229382 2563543 filesync.go:149] local asset: /home/jenkins/minikube-integration/17340-2494243/.minikube/files/etc/ssl/certs/24995982.pem -> 24995982.pem in /etc/ssl/certs
	I1002 12:03:08.229394 2563543 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2494243/.minikube/files/etc/ssl/certs/24995982.pem -> /etc/ssl/certs/24995982.pem
	I1002 12:03:08.229493 2563543 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 12:03:08.241011 2563543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2494243/.minikube/files/etc/ssl/certs/24995982.pem --> /etc/ssl/certs/24995982.pem (1708 bytes)
	I1002 12:03:08.272754 2563543 start.go:303] post-start completed in 169.505978ms
	I1002 12:03:08.273159 2563543 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-361100-m02
	I1002 12:03:08.291408 2563543 profile.go:148] Saving config to /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/multinode-361100/config.json ...
	I1002 12:03:08.291691 2563543 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 12:03:08.291732 2563543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-361100-m02
	I1002 12:03:08.312581 2563543 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35952 SSHKeyPath:/home/jenkins/minikube-integration/17340-2494243/.minikube/machines/multinode-361100-m02/id_rsa Username:docker}
	I1002 12:03:08.406750 2563543 command_runner.go:130] > 18%
	I1002 12:03:08.407204 2563543 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 12:03:08.413572 2563543 command_runner.go:130] > 159G
	I1002 12:03:08.413599 2563543 start.go:128] duration metric: createHost completed in 8.253156055s
	I1002 12:03:08.413608 2563543 start.go:83] releasing machines lock for "multinode-361100-m02", held for 8.253331694s
	I1002 12:03:08.413679 2563543 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-361100-m02
	I1002 12:03:08.437064 2563543 out.go:177] * Found network options:
	I1002 12:03:08.439007 2563543 out.go:177]   - NO_PROXY=192.168.58.2
	W1002 12:03:08.441143 2563543 proxy.go:119] fail to check proxy env: Error ip not in block
	W1002 12:03:08.441196 2563543 proxy.go:119] fail to check proxy env: Error ip not in block
	I1002 12:03:08.441273 2563543 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 12:03:08.441324 2563543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-361100-m02
	I1002 12:03:08.441598 2563543 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 12:03:08.441679 2563543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-361100-m02
	I1002 12:03:08.471091 2563543 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35952 SSHKeyPath:/home/jenkins/minikube-integration/17340-2494243/.minikube/machines/multinode-361100-m02/id_rsa Username:docker}
	I1002 12:03:08.480036 2563543 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35952 SSHKeyPath:/home/jenkins/minikube-integration/17340-2494243/.minikube/machines/multinode-361100-m02/id_rsa Username:docker}
	I1002 12:03:08.722944 2563543 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1002 12:03:08.723047 2563543 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1002 12:03:08.729486 2563543 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I1002 12:03:08.729513 2563543 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I1002 12:03:08.729522 2563543 command_runner.go:130] > Device: b6h/182d	Inode: 2868618     Links: 1
	I1002 12:03:08.729529 2563543 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1002 12:03:08.729536 2563543 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I1002 12:03:08.729543 2563543 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I1002 12:03:08.729549 2563543 command_runner.go:130] > Change: 2023-10-02 07:16:55.608351941 +0000
	I1002 12:03:08.729556 2563543 command_runner.go:130] >  Birth: 2023-10-02 07:16:55.608351941 +0000
	I1002 12:03:08.729648 2563543 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 12:03:08.754291 2563543 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1002 12:03:08.754433 2563543 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 12:03:08.796256 2563543 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I1002 12:03:08.796353 2563543 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I1002 12:03:08.796377 2563543 start.go:469] detecting cgroup driver to use...
	I1002 12:03:08.796439 2563543 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1002 12:03:08.796539 2563543 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 12:03:08.815945 2563543 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 12:03:08.830020 2563543 docker.go:197] disabling cri-docker service (if available) ...
	I1002 12:03:08.830127 2563543 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 12:03:08.847116 2563543 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 12:03:08.865713 2563543 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 12:03:08.969564 2563543 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 12:03:09.085607 2563543 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I1002 12:03:09.085644 2563543 docker.go:213] disabling docker service ...
	I1002 12:03:09.085707 2563543 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 12:03:09.111246 2563543 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 12:03:09.126689 2563543 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 12:03:09.141527 2563543 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I1002 12:03:09.236895 2563543 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 12:03:09.251072 2563543 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I1002 12:03:09.350875 2563543 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 12:03:09.365507 2563543 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 12:03:09.386765 2563543 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1002 12:03:09.388472 2563543 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1002 12:03:09.388623 2563543 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 12:03:09.401365 2563543 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 12:03:09.401437 2563543 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 12:03:09.413699 2563543 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 12:03:09.426074 2563543 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 12:03:09.438483 2563543 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 12:03:09.450188 2563543 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 12:03:09.459679 2563543 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1002 12:03:09.460825 2563543 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 12:03:09.471951 2563543 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 12:03:09.578961 2563543 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 12:03:09.705546 2563543 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 12:03:09.705668 2563543 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 12:03:09.710680 2563543 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1002 12:03:09.710749 2563543 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1002 12:03:09.710773 2563543 command_runner.go:130] > Device: bfh/191d	Inode: 190         Links: 1
	I1002 12:03:09.710804 2563543 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1002 12:03:09.710835 2563543 command_runner.go:130] > Access: 2023-10-02 12:03:09.692433624 +0000
	I1002 12:03:09.710857 2563543 command_runner.go:130] > Modify: 2023-10-02 12:03:09.692433624 +0000
	I1002 12:03:09.710879 2563543 command_runner.go:130] > Change: 2023-10-02 12:03:09.692433624 +0000
	I1002 12:03:09.710905 2563543 command_runner.go:130] >  Birth: -
	I1002 12:03:09.711536 2563543 start.go:537] Will wait 60s for crictl version
	I1002 12:03:09.711631 2563543 ssh_runner.go:195] Run: which crictl
	I1002 12:03:09.716319 2563543 command_runner.go:130] > /usr/bin/crictl
	I1002 12:03:09.716913 2563543 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1002 12:03:09.758417 2563543 command_runner.go:130] > Version:  0.1.0
	I1002 12:03:09.758528 2563543 command_runner.go:130] > RuntimeName:  cri-o
	I1002 12:03:09.758559 2563543 command_runner.go:130] > RuntimeVersion:  1.24.6
	I1002 12:03:09.758578 2563543 command_runner.go:130] > RuntimeApiVersion:  v1
	I1002 12:03:09.761729 2563543 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1002 12:03:09.761857 2563543 ssh_runner.go:195] Run: crio --version
	I1002 12:03:09.808646 2563543 command_runner.go:130] > crio version 1.24.6
	I1002 12:03:09.808668 2563543 command_runner.go:130] > Version:          1.24.6
	I1002 12:03:09.808678 2563543 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I1002 12:03:09.808683 2563543 command_runner.go:130] > GitTreeState:     clean
	I1002 12:03:09.808690 2563543 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I1002 12:03:09.808704 2563543 command_runner.go:130] > GoVersion:        go1.18.2
	I1002 12:03:09.808709 2563543 command_runner.go:130] > Compiler:         gc
	I1002 12:03:09.808715 2563543 command_runner.go:130] > Platform:         linux/arm64
	I1002 12:03:09.808721 2563543 command_runner.go:130] > Linkmode:         dynamic
	I1002 12:03:09.808730 2563543 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1002 12:03:09.808739 2563543 command_runner.go:130] > SeccompEnabled:   true
	I1002 12:03:09.808744 2563543 command_runner.go:130] > AppArmorEnabled:  false
	I1002 12:03:09.811510 2563543 ssh_runner.go:195] Run: crio --version
	I1002 12:03:09.863562 2563543 command_runner.go:130] > crio version 1.24.6
	I1002 12:03:09.863582 2563543 command_runner.go:130] > Version:          1.24.6
	I1002 12:03:09.863592 2563543 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I1002 12:03:09.863597 2563543 command_runner.go:130] > GitTreeState:     clean
	I1002 12:03:09.863643 2563543 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I1002 12:03:09.863655 2563543 command_runner.go:130] > GoVersion:        go1.18.2
	I1002 12:03:09.863660 2563543 command_runner.go:130] > Compiler:         gc
	I1002 12:03:09.863666 2563543 command_runner.go:130] > Platform:         linux/arm64
	I1002 12:03:09.863684 2563543 command_runner.go:130] > Linkmode:         dynamic
	I1002 12:03:09.863732 2563543 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1002 12:03:09.863751 2563543 command_runner.go:130] > SeccompEnabled:   true
	I1002 12:03:09.863756 2563543 command_runner.go:130] > AppArmorEnabled:  false
	I1002 12:03:09.866848 2563543 out.go:177] * Preparing Kubernetes v1.28.2 on CRI-O 1.24.6 ...
	I1002 12:03:09.868715 2563543 out.go:177]   - env NO_PROXY=192.168.58.2
	I1002 12:03:09.870893 2563543 cli_runner.go:164] Run: docker network inspect multinode-361100 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 12:03:09.889420 2563543 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I1002 12:03:09.894124 2563543 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 12:03:09.907993 2563543 certs.go:56] Setting up /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/multinode-361100 for IP: 192.168.58.3
	I1002 12:03:09.908022 2563543 certs.go:190] acquiring lock for shared ca certs: {Name:mk2e28f0a4c3849593f708b97426b4e4332dc9e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 12:03:09.908165 2563543 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17340-2494243/.minikube/ca.key
	I1002 12:03:09.908204 2563543 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17340-2494243/.minikube/proxy-client-ca.key
	I1002 12:03:09.908216 2563543 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2494243/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1002 12:03:09.908231 2563543 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2494243/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1002 12:03:09.908244 2563543 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2494243/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1002 12:03:09.908255 2563543 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2494243/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1002 12:03:09.908312 2563543 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-2494243/.minikube/certs/home/jenkins/minikube-integration/17340-2494243/.minikube/certs/2499598.pem (1338 bytes)
	W1002 12:03:09.908341 2563543 certs.go:433] ignoring /home/jenkins/minikube-integration/17340-2494243/.minikube/certs/home/jenkins/minikube-integration/17340-2494243/.minikube/certs/2499598_empty.pem, impossibly tiny 0 bytes
	I1002 12:03:09.908350 2563543 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-2494243/.minikube/certs/home/jenkins/minikube-integration/17340-2494243/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 12:03:09.908379 2563543 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-2494243/.minikube/certs/home/jenkins/minikube-integration/17340-2494243/.minikube/certs/ca.pem (1082 bytes)
	I1002 12:03:09.908403 2563543 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-2494243/.minikube/certs/home/jenkins/minikube-integration/17340-2494243/.minikube/certs/cert.pem (1123 bytes)
	I1002 12:03:09.908426 2563543 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-2494243/.minikube/certs/home/jenkins/minikube-integration/17340-2494243/.minikube/certs/key.pem (1675 bytes)
	I1002 12:03:09.908470 2563543 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-2494243/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17340-2494243/.minikube/files/etc/ssl/certs/24995982.pem (1708 bytes)
	I1002 12:03:09.908500 2563543 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2494243/.minikube/files/etc/ssl/certs/24995982.pem -> /usr/share/ca-certificates/24995982.pem
	I1002 12:03:09.908512 2563543 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2494243/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1002 12:03:09.908596 2563543 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2494243/.minikube/certs/2499598.pem -> /usr/share/ca-certificates/2499598.pem
	I1002 12:03:09.908931 2563543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2494243/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 12:03:09.939941 2563543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2494243/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1002 12:03:09.971559 2563543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2494243/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 12:03:10.003033 2563543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2494243/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1002 12:03:10.041677 2563543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2494243/.minikube/files/etc/ssl/certs/24995982.pem --> /usr/share/ca-certificates/24995982.pem (1708 bytes)
	I1002 12:03:10.072682 2563543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2494243/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 12:03:10.105298 2563543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2494243/.minikube/certs/2499598.pem --> /usr/share/ca-certificates/2499598.pem (1338 bytes)
	I1002 12:03:10.137359 2563543 ssh_runner.go:195] Run: openssl version
	I1002 12:03:10.144542 2563543 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I1002 12:03:10.144649 2563543 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2499598.pem && ln -fs /usr/share/ca-certificates/2499598.pem /etc/ssl/certs/2499598.pem"
	I1002 12:03:10.157290 2563543 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2499598.pem
	I1002 12:03:10.162089 2563543 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct  2 11:46 /usr/share/ca-certificates/2499598.pem
	I1002 12:03:10.162377 2563543 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct  2 11:46 /usr/share/ca-certificates/2499598.pem
	I1002 12:03:10.162463 2563543 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2499598.pem
	I1002 12:03:10.170973 2563543 command_runner.go:130] > 51391683
	I1002 12:03:10.171423 2563543 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2499598.pem /etc/ssl/certs/51391683.0"
	I1002 12:03:10.184621 2563543 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/24995982.pem && ln -fs /usr/share/ca-certificates/24995982.pem /etc/ssl/certs/24995982.pem"
	I1002 12:03:10.197389 2563543 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/24995982.pem
	I1002 12:03:10.202428 2563543 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct  2 11:46 /usr/share/ca-certificates/24995982.pem
	I1002 12:03:10.202463 2563543 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct  2 11:46 /usr/share/ca-certificates/24995982.pem
	I1002 12:03:10.202516 2563543 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/24995982.pem
	I1002 12:03:10.211085 2563543 command_runner.go:130] > 3ec20f2e
	I1002 12:03:10.211543 2563543 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/24995982.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 12:03:10.224265 2563543 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 12:03:10.236786 2563543 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 12:03:10.241865 2563543 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct  2 11:39 /usr/share/ca-certificates/minikubeCA.pem
	I1002 12:03:10.241894 2563543 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct  2 11:39 /usr/share/ca-certificates/minikubeCA.pem
	I1002 12:03:10.241946 2563543 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 12:03:10.250661 2563543 command_runner.go:130] > b5213941
	I1002 12:03:10.251034 2563543 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 12:03:10.265149 2563543 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1002 12:03:10.269836 2563543 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1002 12:03:10.269920 2563543 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1002 12:03:10.270029 2563543 ssh_runner.go:195] Run: crio config
	I1002 12:03:10.327828 2563543 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1002 12:03:10.327855 2563543 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1002 12:03:10.327864 2563543 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1002 12:03:10.327869 2563543 command_runner.go:130] > #
	I1002 12:03:10.327878 2563543 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1002 12:03:10.327886 2563543 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1002 12:03:10.327894 2563543 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1002 12:03:10.327911 2563543 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1002 12:03:10.327921 2563543 command_runner.go:130] > # reload'.
	I1002 12:03:10.327929 2563543 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1002 12:03:10.327939 2563543 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1002 12:03:10.327949 2563543 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1002 12:03:10.327957 2563543 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1002 12:03:10.327964 2563543 command_runner.go:130] > [crio]
	I1002 12:03:10.327972 2563543 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1002 12:03:10.327978 2563543 command_runner.go:130] > # containers images, in this directory.
	I1002 12:03:10.327991 2563543 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1002 12:03:10.328001 2563543 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1002 12:03:10.328009 2563543 command_runner.go:130] > # runroot = "/tmp/containers-user-1000/containers"
	I1002 12:03:10.328019 2563543 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1002 12:03:10.328027 2563543 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1002 12:03:10.328036 2563543 command_runner.go:130] > # storage_driver = "vfs"
	I1002 12:03:10.328047 2563543 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1002 12:03:10.328058 2563543 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1002 12:03:10.328071 2563543 command_runner.go:130] > # storage_option = [
	I1002 12:03:10.328080 2563543 command_runner.go:130] > # ]
	I1002 12:03:10.328089 2563543 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1002 12:03:10.328099 2563543 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1002 12:03:10.328108 2563543 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1002 12:03:10.328116 2563543 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1002 12:03:10.328126 2563543 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1002 12:03:10.328132 2563543 command_runner.go:130] > # always happen on a node reboot
	I1002 12:03:10.328144 2563543 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1002 12:03:10.328153 2563543 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1002 12:03:10.328163 2563543 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1002 12:03:10.328176 2563543 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1002 12:03:10.328188 2563543 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I1002 12:03:10.328198 2563543 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1002 12:03:10.328211 2563543 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1002 12:03:10.328216 2563543 command_runner.go:130] > # internal_wipe = true
	I1002 12:03:10.328226 2563543 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1002 12:03:10.328235 2563543 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1002 12:03:10.328246 2563543 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1002 12:03:10.328253 2563543 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1002 12:03:10.328263 2563543 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1002 12:03:10.328270 2563543 command_runner.go:130] > [crio.api]
	I1002 12:03:10.328279 2563543 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1002 12:03:10.328291 2563543 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1002 12:03:10.328298 2563543 command_runner.go:130] > # IP address on which the stream server will listen.
	I1002 12:03:10.328306 2563543 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1002 12:03:10.328314 2563543 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1002 12:03:10.328323 2563543 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1002 12:03:10.328697 2563543 command_runner.go:130] > # stream_port = "0"
	I1002 12:03:10.328725 2563543 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1002 12:03:10.328739 2563543 command_runner.go:130] > # stream_enable_tls = false
	I1002 12:03:10.328754 2563543 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1002 12:03:10.328760 2563543 command_runner.go:130] > # stream_idle_timeout = ""
	I1002 12:03:10.328768 2563543 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1002 12:03:10.328780 2563543 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1002 12:03:10.328785 2563543 command_runner.go:130] > # minutes.
	I1002 12:03:10.329064 2563543 command_runner.go:130] > # stream_tls_cert = ""
	I1002 12:03:10.329083 2563543 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1002 12:03:10.329103 2563543 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1002 12:03:10.329110 2563543 command_runner.go:130] > # stream_tls_key = ""
	I1002 12:03:10.329118 2563543 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1002 12:03:10.329130 2563543 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1002 12:03:10.329139 2563543 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1002 12:03:10.329149 2563543 command_runner.go:130] > # stream_tls_ca = ""
	I1002 12:03:10.329158 2563543 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I1002 12:03:10.329389 2563543 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1002 12:03:10.329411 2563543 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I1002 12:03:10.329631 2563543 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1002 12:03:10.329656 2563543 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1002 12:03:10.329674 2563543 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1002 12:03:10.329684 2563543 command_runner.go:130] > [crio.runtime]
	I1002 12:03:10.329691 2563543 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1002 12:03:10.329698 2563543 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1002 12:03:10.329704 2563543 command_runner.go:130] > # "nofile=1024:2048"
	I1002 12:03:10.329718 2563543 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1002 12:03:10.329723 2563543 command_runner.go:130] > # default_ulimits = [
	I1002 12:03:10.330023 2563543 command_runner.go:130] > # ]
	I1002 12:03:10.330042 2563543 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1002 12:03:10.330058 2563543 command_runner.go:130] > # no_pivot = false
	I1002 12:03:10.330066 2563543 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1002 12:03:10.330080 2563543 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1002 12:03:10.330086 2563543 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1002 12:03:10.330096 2563543 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1002 12:03:10.330103 2563543 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1002 12:03:10.330114 2563543 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1002 12:03:10.330119 2563543 command_runner.go:130] > # conmon = ""
	I1002 12:03:10.330135 2563543 command_runner.go:130] > # Cgroup setting for conmon
	I1002 12:03:10.330150 2563543 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1002 12:03:10.330155 2563543 command_runner.go:130] > conmon_cgroup = "pod"
	I1002 12:03:10.330166 2563543 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1002 12:03:10.330177 2563543 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1002 12:03:10.330186 2563543 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1002 12:03:10.330195 2563543 command_runner.go:130] > # conmon_env = [
	I1002 12:03:10.330207 2563543 command_runner.go:130] > # ]
	I1002 12:03:10.330218 2563543 command_runner.go:130] > # Additional environment variables to set for all the
	I1002 12:03:10.330225 2563543 command_runner.go:130] > # containers. These are overridden if set in the
	I1002 12:03:10.330236 2563543 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1002 12:03:10.330518 2563543 command_runner.go:130] > # default_env = [
	I1002 12:03:10.330532 2563543 command_runner.go:130] > # ]
	I1002 12:03:10.330550 2563543 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1002 12:03:10.330559 2563543 command_runner.go:130] > # selinux = false
	I1002 12:03:10.330571 2563543 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1002 12:03:10.330579 2563543 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1002 12:03:10.330588 2563543 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1002 12:03:10.330594 2563543 command_runner.go:130] > # seccomp_profile = ""
	I1002 12:03:10.330603 2563543 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1002 12:03:10.330610 2563543 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1002 12:03:10.330626 2563543 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1002 12:03:10.330635 2563543 command_runner.go:130] > # which might increase security.
	I1002 12:03:10.331171 2563543 command_runner.go:130] > # seccomp_use_default_when_empty = true
	I1002 12:03:10.331191 2563543 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1002 12:03:10.331210 2563543 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1002 12:03:10.331226 2563543 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1002 12:03:10.331234 2563543 command_runner.go:130] > # the profile is set to "unconfined", then this is equivalent to disabling AppArmor.
	I1002 12:03:10.331244 2563543 command_runner.go:130] > # This option supports live configuration reload.
	I1002 12:03:10.331249 2563543 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1002 12:03:10.331259 2563543 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1002 12:03:10.331265 2563543 command_runner.go:130] > # the cgroup blockio controller.
	I1002 12:03:10.331279 2563543 command_runner.go:130] > # blockio_config_file = ""
	I1002 12:03:10.331291 2563543 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1002 12:03:10.331296 2563543 command_runner.go:130] > # irqbalance daemon.
	I1002 12:03:10.331307 2563543 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1002 12:03:10.331319 2563543 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1002 12:03:10.331326 2563543 command_runner.go:130] > # This option supports live configuration reload.
	I1002 12:03:10.331334 2563543 command_runner.go:130] > # rdt_config_file = ""
	I1002 12:03:10.331341 2563543 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1002 12:03:10.331355 2563543 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1002 12:03:10.331366 2563543 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1002 12:03:10.331372 2563543 command_runner.go:130] > # separate_pull_cgroup = ""
	I1002 12:03:10.331382 2563543 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1002 12:03:10.331389 2563543 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1002 12:03:10.331394 2563543 command_runner.go:130] > # will be added.
	I1002 12:03:10.331405 2563543 command_runner.go:130] > # default_capabilities = [
	I1002 12:03:10.331887 2563543 command_runner.go:130] > # 	"CHOWN",
	I1002 12:03:10.331903 2563543 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1002 12:03:10.331908 2563543 command_runner.go:130] > # 	"FSETID",
	I1002 12:03:10.331924 2563543 command_runner.go:130] > # 	"FOWNER",
	I1002 12:03:10.331934 2563543 command_runner.go:130] > # 	"SETGID",
	I1002 12:03:10.331939 2563543 command_runner.go:130] > # 	"SETUID",
	I1002 12:03:10.331945 2563543 command_runner.go:130] > # 	"SETPCAP",
	I1002 12:03:10.332383 2563543 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1002 12:03:10.332834 2563543 command_runner.go:130] > # 	"KILL",
	I1002 12:03:10.333275 2563543 command_runner.go:130] > # ]
	I1002 12:03:10.333295 2563543 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1002 12:03:10.333304 2563543 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1002 12:03:10.334703 2563543 command_runner.go:130] > # add_inheritable_capabilities = true
	I1002 12:03:10.334723 2563543 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1002 12:03:10.334731 2563543 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1002 12:03:10.334736 2563543 command_runner.go:130] > # default_sysctls = [
	I1002 12:03:10.334745 2563543 command_runner.go:130] > # ]
	I1002 12:03:10.334751 2563543 command_runner.go:130] > # List of devices on the host that a
	I1002 12:03:10.334767 2563543 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1002 12:03:10.334776 2563543 command_runner.go:130] > # allowed_devices = [
	I1002 12:03:10.334781 2563543 command_runner.go:130] > # 	"/dev/fuse",
	I1002 12:03:10.334785 2563543 command_runner.go:130] > # ]
	I1002 12:03:10.334793 2563543 command_runner.go:130] > # List of additional devices, specified as
	I1002 12:03:10.334814 2563543 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1002 12:03:10.334825 2563543 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1002 12:03:10.334841 2563543 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1002 12:03:10.334850 2563543 command_runner.go:130] > # additional_devices = [
	I1002 12:03:10.334854 2563543 command_runner.go:130] > # ]
	I1002 12:03:10.334861 2563543 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1002 12:03:10.334869 2563543 command_runner.go:130] > # cdi_spec_dirs = [
	I1002 12:03:10.334874 2563543 command_runner.go:130] > # 	"/etc/cdi",
	I1002 12:03:10.334883 2563543 command_runner.go:130] > # 	"/var/run/cdi",
	I1002 12:03:10.334887 2563543 command_runner.go:130] > # ]
	I1002 12:03:10.334895 2563543 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1002 12:03:10.334905 2563543 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1002 12:03:10.334916 2563543 command_runner.go:130] > # Defaults to false.
	I1002 12:03:10.334925 2563543 command_runner.go:130] > # device_ownership_from_security_context = false
	I1002 12:03:10.334933 2563543 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1002 12:03:10.334944 2563543 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1002 12:03:10.334949 2563543 command_runner.go:130] > # hooks_dir = [
	I1002 12:03:10.334958 2563543 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1002 12:03:10.334962 2563543 command_runner.go:130] > # ]
	I1002 12:03:10.334970 2563543 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1002 12:03:10.334981 2563543 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1002 12:03:10.334996 2563543 command_runner.go:130] > # its default mounts from the following two files:
	I1002 12:03:10.335003 2563543 command_runner.go:130] > #
	I1002 12:03:10.335013 2563543 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1002 12:03:10.335025 2563543 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1002 12:03:10.335032 2563543 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1002 12:03:10.335039 2563543 command_runner.go:130] > #
	I1002 12:03:10.335046 2563543 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1002 12:03:10.335063 2563543 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1002 12:03:10.335075 2563543 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1002 12:03:10.335082 2563543 command_runner.go:130] > #      only add mounts it finds in this file.
	I1002 12:03:10.335086 2563543 command_runner.go:130] > #
	I1002 12:03:10.335094 2563543 command_runner.go:130] > # default_mounts_file = ""
	I1002 12:03:10.335103 2563543 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1002 12:03:10.335111 2563543 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1002 12:03:10.335119 2563543 command_runner.go:130] > # pids_limit = 0
	I1002 12:03:10.335127 2563543 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1002 12:03:10.335145 2563543 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1002 12:03:10.335157 2563543 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1002 12:03:10.335167 2563543 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1002 12:03:10.335175 2563543 command_runner.go:130] > # log_size_max = -1
	I1002 12:03:10.335183 2563543 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1002 12:03:10.335191 2563543 command_runner.go:130] > # log_to_journald = false
	I1002 12:03:10.335201 2563543 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1002 12:03:10.335213 2563543 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1002 12:03:10.335223 2563543 command_runner.go:130] > # Path to directory for container attach sockets.
	I1002 12:03:10.335229 2563543 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1002 12:03:10.335238 2563543 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1002 12:03:10.335244 2563543 command_runner.go:130] > # bind_mount_prefix = ""
	I1002 12:03:10.335253 2563543 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1002 12:03:10.335259 2563543 command_runner.go:130] > # read_only = false
	I1002 12:03:10.335266 2563543 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1002 12:03:10.335277 2563543 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1002 12:03:10.335288 2563543 command_runner.go:130] > # live configuration reload.
	I1002 12:03:10.335297 2563543 command_runner.go:130] > # log_level = "info"
	I1002 12:03:10.335304 2563543 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1002 12:03:10.335313 2563543 command_runner.go:130] > # This option supports live configuration reload.
	I1002 12:03:10.335318 2563543 command_runner.go:130] > # log_filter = ""
	I1002 12:03:10.335329 2563543 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1002 12:03:10.335340 2563543 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1002 12:03:10.335345 2563543 command_runner.go:130] > # separated by comma.
	I1002 12:03:10.335353 2563543 command_runner.go:130] > # uid_mappings = ""
	I1002 12:03:10.335366 2563543 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1002 12:03:10.335374 2563543 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1002 12:03:10.335384 2563543 command_runner.go:130] > # separated by comma.
	I1002 12:03:10.335389 2563543 command_runner.go:130] > # gid_mappings = ""
	I1002 12:03:10.335397 2563543 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1002 12:03:10.335411 2563543 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1002 12:03:10.335419 2563543 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1002 12:03:10.335427 2563543 command_runner.go:130] > # minimum_mappable_uid = -1
	I1002 12:03:10.335440 2563543 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1002 12:03:10.335473 2563543 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1002 12:03:10.335485 2563543 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1002 12:03:10.335492 2563543 command_runner.go:130] > # minimum_mappable_gid = -1
	I1002 12:03:10.335510 2563543 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1002 12:03:10.335521 2563543 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1002 12:03:10.335529 2563543 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1002 12:03:10.335538 2563543 command_runner.go:130] > # ctr_stop_timeout = 30
	I1002 12:03:10.335545 2563543 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1002 12:03:10.335552 2563543 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1002 12:03:10.335561 2563543 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1002 12:03:10.335567 2563543 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1002 12:03:10.335574 2563543 command_runner.go:130] > # drop_infra_ctr = true
	I1002 12:03:10.335589 2563543 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1002 12:03:10.335599 2563543 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1002 12:03:10.335608 2563543 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1002 12:03:10.335617 2563543 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1002 12:03:10.335624 2563543 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1002 12:03:10.335631 2563543 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1002 12:03:10.335638 2563543 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1002 12:03:10.335647 2563543 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1002 12:03:10.335660 2563543 command_runner.go:130] > # pinns_path = ""
	I1002 12:03:10.335669 2563543 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1002 12:03:10.335680 2563543 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I1002 12:03:10.335688 2563543 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I1002 12:03:10.335697 2563543 command_runner.go:130] > # default_runtime = "runc"
	I1002 12:03:10.335703 2563543 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1002 12:03:10.335714 2563543 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1002 12:03:10.335726 2563543 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1002 12:03:10.335741 2563543 command_runner.go:130] > # creation as a file is not desired either.
	I1002 12:03:10.335752 2563543 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1002 12:03:10.335762 2563543 command_runner.go:130] > # the hostname is being managed dynamically.
	I1002 12:03:10.335768 2563543 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1002 12:03:10.335775 2563543 command_runner.go:130] > # ]
	I1002 12:03:10.335783 2563543 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1002 12:03:10.335793 2563543 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1002 12:03:10.335801 2563543 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I1002 12:03:10.335817 2563543 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I1002 12:03:10.335824 2563543 command_runner.go:130] > #
	I1002 12:03:10.335830 2563543 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I1002 12:03:10.335837 2563543 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I1002 12:03:10.335844 2563543 command_runner.go:130] > #  runtime_type = "oci"
	I1002 12:03:10.335852 2563543 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I1002 12:03:10.335860 2563543 command_runner.go:130] > #  privileged_without_host_devices = false
	I1002 12:03:10.335869 2563543 command_runner.go:130] > #  allowed_annotations = []
	I1002 12:03:10.335873 2563543 command_runner.go:130] > # Where:
	I1002 12:03:10.335887 2563543 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I1002 12:03:10.335901 2563543 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I1002 12:03:10.335909 2563543 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1002 12:03:10.335922 2563543 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1002 12:03:10.335927 2563543 command_runner.go:130] > #   in $PATH.
	I1002 12:03:10.335937 2563543 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I1002 12:03:10.335945 2563543 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1002 12:03:10.335953 2563543 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I1002 12:03:10.335968 2563543 command_runner.go:130] > #   state.
	I1002 12:03:10.335976 2563543 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1002 12:03:10.335986 2563543 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1002 12:03:10.335994 2563543 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1002 12:03:10.336005 2563543 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1002 12:03:10.336013 2563543 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1002 12:03:10.336024 2563543 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1002 12:03:10.336038 2563543 command_runner.go:130] > #   The currently recognized values are:
	I1002 12:03:10.336050 2563543 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1002 12:03:10.336059 2563543 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1002 12:03:10.336077 2563543 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1002 12:03:10.336085 2563543 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1002 12:03:10.336097 2563543 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1002 12:03:10.336111 2563543 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1002 12:03:10.336143 2563543 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1002 12:03:10.336156 2563543 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I1002 12:03:10.336163 2563543 command_runner.go:130] > #   should be moved to the container's cgroup
	I1002 12:03:10.336171 2563543 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1002 12:03:10.336185 2563543 command_runner.go:130] > runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	I1002 12:03:10.336193 2563543 command_runner.go:130] > runtime_type = "oci"
	I1002 12:03:10.336199 2563543 command_runner.go:130] > runtime_root = "/run/runc"
	I1002 12:03:10.336205 2563543 command_runner.go:130] > runtime_config_path = ""
	I1002 12:03:10.336213 2563543 command_runner.go:130] > monitor_path = ""
	I1002 12:03:10.336219 2563543 command_runner.go:130] > monitor_cgroup = ""
	I1002 12:03:10.336227 2563543 command_runner.go:130] > monitor_exec_cgroup = ""
	I1002 12:03:10.336243 2563543 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I1002 12:03:10.336259 2563543 command_runner.go:130] > # running containers
	I1002 12:03:10.336267 2563543 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I1002 12:03:10.336275 2563543 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I1002 12:03:10.336286 2563543 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I1002 12:03:10.336294 2563543 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I1002 12:03:10.336303 2563543 command_runner.go:130] > # Kata Containers with the default configured VMM
	I1002 12:03:10.336309 2563543 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I1002 12:03:10.336317 2563543 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I1002 12:03:10.336323 2563543 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I1002 12:03:10.336337 2563543 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I1002 12:03:10.336346 2563543 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I1002 12:03:10.336355 2563543 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1002 12:03:10.336364 2563543 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1002 12:03:10.336372 2563543 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1002 12:03:10.336382 2563543 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1002 12:03:10.336395 2563543 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1002 12:03:10.336409 2563543 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1002 12:03:10.336424 2563543 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1002 12:03:10.336437 2563543 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1002 12:03:10.336444 2563543 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1002 12:03:10.336456 2563543 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1002 12:03:10.336461 2563543 command_runner.go:130] > # Example:
	I1002 12:03:10.336467 2563543 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1002 12:03:10.336484 2563543 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1002 12:03:10.336491 2563543 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1002 12:03:10.336502 2563543 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1002 12:03:10.336507 2563543 command_runner.go:130] > # cpuset = 0
	I1002 12:03:10.336512 2563543 command_runner.go:130] > # cpushares = "0-1"
	I1002 12:03:10.336538 2563543 command_runner.go:130] > # Where:
	I1002 12:03:10.336545 2563543 command_runner.go:130] > # The workload name is workload-type.
	I1002 12:03:10.336553 2563543 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1002 12:03:10.336563 2563543 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1002 12:03:10.336573 2563543 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1002 12:03:10.336587 2563543 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1002 12:03:10.336595 2563543 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1002 12:03:10.336601 2563543 command_runner.go:130] > # 
	I1002 12:03:10.336616 2563543 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1002 12:03:10.336627 2563543 command_runner.go:130] > #
	I1002 12:03:10.336635 2563543 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1002 12:03:10.336643 2563543 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1002 12:03:10.336651 2563543 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1002 12:03:10.336662 2563543 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1002 12:03:10.336670 2563543 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1002 12:03:10.336750 2563543 command_runner.go:130] > [crio.image]
	I1002 12:03:10.336768 2563543 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1002 12:03:10.336775 2563543 command_runner.go:130] > # default_transport = "docker://"
	I1002 12:03:10.336783 2563543 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1002 12:03:10.336791 2563543 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1002 12:03:10.336799 2563543 command_runner.go:130] > # global_auth_file = ""
	I1002 12:03:10.336806 2563543 command_runner.go:130] > # The image used to instantiate infra containers.
	I1002 12:03:10.336824 2563543 command_runner.go:130] > # This option supports live configuration reload.
	I1002 12:03:10.336834 2563543 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I1002 12:03:10.336842 2563543 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1002 12:03:10.336853 2563543 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1002 12:03:10.336859 2563543 command_runner.go:130] > # This option supports live configuration reload.
	I1002 12:03:10.336865 2563543 command_runner.go:130] > # pause_image_auth_file = ""
	I1002 12:03:10.336872 2563543 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1002 12:03:10.336884 2563543 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1002 12:03:10.336899 2563543 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1002 12:03:10.336911 2563543 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1002 12:03:10.336980 2563543 command_runner.go:130] > # pause_command = "/pause"
	I1002 12:03:10.336995 2563543 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1002 12:03:10.337003 2563543 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1002 12:03:10.337011 2563543 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1002 12:03:10.337023 2563543 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1002 12:03:10.337030 2563543 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1002 12:03:10.337035 2563543 command_runner.go:130] > # signature_policy = ""
	I1002 12:03:10.337045 2563543 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1002 12:03:10.337063 2563543 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1002 12:03:10.337069 2563543 command_runner.go:130] > # changing them here.
	I1002 12:03:10.337074 2563543 command_runner.go:130] > # insecure_registries = [
	I1002 12:03:10.337081 2563543 command_runner.go:130] > # ]
	I1002 12:03:10.337089 2563543 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1002 12:03:10.337098 2563543 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1002 12:03:10.337104 2563543 command_runner.go:130] > # image_volumes = "mkdir"
	I1002 12:03:10.337116 2563543 command_runner.go:130] > # Temporary directory to use for storing big files
	I1002 12:03:10.337129 2563543 command_runner.go:130] > # big_files_temporary_dir = ""
	I1002 12:03:10.337137 2563543 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1002 12:03:10.337145 2563543 command_runner.go:130] > # CNI plugins.
	I1002 12:03:10.337150 2563543 command_runner.go:130] > [crio.network]
	I1002 12:03:10.337158 2563543 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1002 12:03:10.337168 2563543 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1002 12:03:10.337174 2563543 command_runner.go:130] > # cni_default_network = ""
	I1002 12:03:10.337184 2563543 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1002 12:03:10.337190 2563543 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1002 12:03:10.337207 2563543 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1002 12:03:10.337212 2563543 command_runner.go:130] > # plugin_dirs = [
	I1002 12:03:10.337218 2563543 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1002 12:03:10.337225 2563543 command_runner.go:130] > # ]
	I1002 12:03:10.337232 2563543 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1002 12:03:10.337240 2563543 command_runner.go:130] > [crio.metrics]
	I1002 12:03:10.337247 2563543 command_runner.go:130] > # Globally enable or disable metrics support.
	I1002 12:03:10.337255 2563543 command_runner.go:130] > # enable_metrics = false
	I1002 12:03:10.337261 2563543 command_runner.go:130] > # Specify enabled metrics collectors.
	I1002 12:03:10.337277 2563543 command_runner.go:130] > # By default, all metrics are enabled.
	I1002 12:03:10.337288 2563543 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1002 12:03:10.337296 2563543 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1002 12:03:10.337304 2563543 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1002 12:03:10.337311 2563543 command_runner.go:130] > # metrics_collectors = [
	I1002 12:03:10.337316 2563543 command_runner.go:130] > # 	"operations",
	I1002 12:03:10.337323 2563543 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1002 12:03:10.337333 2563543 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1002 12:03:10.337338 2563543 command_runner.go:130] > # 	"operations_errors",
	I1002 12:03:10.337353 2563543 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1002 12:03:10.337362 2563543 command_runner.go:130] > # 	"image_pulls_by_name",
	I1002 12:03:10.337367 2563543 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1002 12:03:10.337376 2563543 command_runner.go:130] > # 	"image_pulls_failures",
	I1002 12:03:10.337381 2563543 command_runner.go:130] > # 	"image_pulls_successes",
	I1002 12:03:10.337386 2563543 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1002 12:03:10.337392 2563543 command_runner.go:130] > # 	"image_layer_reuse",
	I1002 12:03:10.337401 2563543 command_runner.go:130] > # 	"containers_oom_total",
	I1002 12:03:10.337406 2563543 command_runner.go:130] > # 	"containers_oom",
	I1002 12:03:10.337412 2563543 command_runner.go:130] > # 	"processes_defunct",
	I1002 12:03:10.337419 2563543 command_runner.go:130] > # 	"operations_total",
	I1002 12:03:10.337433 2563543 command_runner.go:130] > # 	"operations_latency_seconds",
	I1002 12:03:10.337442 2563543 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1002 12:03:10.337447 2563543 command_runner.go:130] > # 	"operations_errors_total",
	I1002 12:03:10.337455 2563543 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1002 12:03:10.337463 2563543 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1002 12:03:10.337475 2563543 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1002 12:03:10.337481 2563543 command_runner.go:130] > # 	"image_pulls_success_total",
	I1002 12:03:10.337489 2563543 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1002 12:03:10.337500 2563543 command_runner.go:130] > # 	"containers_oom_count_total",
	I1002 12:03:10.337508 2563543 command_runner.go:130] > # ]
	I1002 12:03:10.337515 2563543 command_runner.go:130] > # The port on which the metrics server will listen.
	I1002 12:03:10.337522 2563543 command_runner.go:130] > # metrics_port = 9090
	I1002 12:03:10.337528 2563543 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1002 12:03:10.337534 2563543 command_runner.go:130] > # metrics_socket = ""
	I1002 12:03:10.337540 2563543 command_runner.go:130] > # The certificate for the secure metrics server.
	I1002 12:03:10.337548 2563543 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1002 12:03:10.337559 2563543 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1002 12:03:10.337565 2563543 command_runner.go:130] > # certificate on any modification event.
	I1002 12:03:10.337579 2563543 command_runner.go:130] > # metrics_cert = ""
	I1002 12:03:10.337589 2563543 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1002 12:03:10.337596 2563543 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1002 12:03:10.337604 2563543 command_runner.go:130] > # metrics_key = ""
	I1002 12:03:10.337611 2563543 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1002 12:03:10.337618 2563543 command_runner.go:130] > [crio.tracing]
	I1002 12:03:10.337625 2563543 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1002 12:03:10.337631 2563543 command_runner.go:130] > # enable_tracing = false
	I1002 12:03:10.337639 2563543 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1002 12:03:10.337655 2563543 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1002 12:03:10.337662 2563543 command_runner.go:130] > # Number of samples to collect per million spans.
	I1002 12:03:10.337671 2563543 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1002 12:03:10.337681 2563543 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1002 12:03:10.337690 2563543 command_runner.go:130] > [crio.stats]
	I1002 12:03:10.337697 2563543 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1002 12:03:10.337704 2563543 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1002 12:03:10.337709 2563543 command_runner.go:130] > # stats_collection_period = 0
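The config dump above is almost entirely commented-out defaults; the only override in effect is `pause_image`. A minimal sketch of how to separate active settings from commented defaults in a dump like this (the excerpt file below is hypothetical, not the full crio.conf):

```shell
# Write a small excerpt mirroring the dump above: mostly commented defaults,
# one active override (pause_image).
cat > /tmp/crio-excerpt.conf <<'EOF'
# pause_command = "/pause"
pause_image = "registry.k8s.io/pause:3.9"
[crio.network]
# cni_default_network = ""
[crio.metrics]
# enable_metrics = false
EOF
# Only uncommented key = value lines are overrides CRI-O will actually apply.
grep -E '^[a-z_]+ *=' /tmp/crio-excerpt.conf
```

On the excerpt this prints just the `pause_image` line, which matches what the test run is overriding.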
	I1002 12:03:10.340211 2563543 command_runner.go:130] ! time="2023-10-02 12:03:10.325018373Z" level=info msg="Starting CRI-O, version: 1.24.6, git: 4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90(clean)"
	I1002 12:03:10.340238 2563543 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1002 12:03:10.340652 2563543 cni.go:84] Creating CNI manager for ""
	I1002 12:03:10.340677 2563543 cni.go:136] 2 nodes found, recommending kindnet
	I1002 12:03:10.340687 2563543 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1002 12:03:10.340707 2563543 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.3 APIServerPort:8443 KubernetesVersion:v1.28.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-361100 NodeName:multinode-361100-m02 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/e
tc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 12:03:10.340835 2563543 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-361100-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
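	minikube renders the kubeadm config above as one multi-document YAML stream: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration separated by `---`. A quick sanity check on such a stream (using a hypothetical skeleton file, not the full rendered config):

```shell
# Skeleton of the four-document kubeadm config stream rendered above.
cat > /tmp/kubeadm-excerpt.yaml <<'EOF'
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
EOF
# One kind per document; a missing separator would merge documents and
# change this count.
grep -c '^kind:' /tmp/kubeadm-excerpt.yaml   # → 4
```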
	I1002 12:03:10.340891 2563543 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=multinode-361100-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.2 ClusterName:multinode-361100 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1002 12:03:10.340960 2563543 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.2
	I1002 12:03:10.350843 2563543 command_runner.go:130] > kubeadm
	I1002 12:03:10.350864 2563543 command_runner.go:130] > kubectl
	I1002 12:03:10.350870 2563543 command_runner.go:130] > kubelet
	I1002 12:03:10.352134 2563543 binaries.go:44] Found k8s binaries, skipping transfer
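	The "Found k8s binaries, skipping transfer" decision above comes from listing the versioned binaries directory and seeing kubeadm, kubectl, and kubelet present. A sketch of the same check against a scratch directory standing in for /var/lib/minikube/binaries/v1.28.2:

```shell
# Scratch dir stands in for the real versioned binaries path (hypothetical).
BINDIR=/tmp/binaries-demo
mkdir -p "$BINDIR" && touch "$BINDIR"/kubeadm "$BINDIR"/kubectl "$BINDIR"/kubelet
# If all three exist, the transfer step can be skipped.
for b in kubeadm kubectl kubelet; do
  [ -e "$BINDIR/$b" ] && echo "found $b"
done
```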
	I1002 12:03:10.352200 2563543 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1002 12:03:10.363142 2563543 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1002 12:03:10.386391 2563543 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 12:03:10.408759 2563543 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I1002 12:03:10.413193 2563543 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
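	The one-liner above rewrites /etc/hosts by filtering out any stale `control-plane.minikube.internal` entry, appending the current IP, and copying the temp file back in one shot. The same pattern demonstrated on a scratch file so it runs without root (the stale 192.168.58.9 entry is an invented example):

```shell
HOSTS=/tmp/demo-hosts
# Seed the file with a stale control-plane entry (hypothetical old IP).
printf '127.0.0.1\tlocalhost\n192.168.58.9\tcontrol-plane.minikube.internal\n' > "$HOSTS"
# Drop any existing entry for the name, append the current one, then
# replace the file via a temp copy, exactly as in the logged command.
{ grep -v $'\tcontrol-plane.minikube.internal$' "$HOSTS"; \
  printf '192.168.58.2\tcontrol-plane.minikube.internal\n'; } > /tmp/h.$$
cp /tmp/h.$$ "$HOSTS"
grep control-plane "$HOSTS"   # stale entry gone, 192.168.58.2 entry present
```

Writing to a temp file first keeps the rewrite atomic from the reader's point of view even if the pipeline is interrupted mid-way.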
	I1002 12:03:10.427314 2563543 host.go:66] Checking if "multinode-361100" exists ...
	I1002 12:03:10.427628 2563543 config.go:182] Loaded profile config "multinode-361100": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1002 12:03:10.427594 2563543 start.go:304] JoinCluster: &{Name:multinode-361100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:multinode-361100 Namespace:default APIServerName:minikubeCA APIServerNames:[] A
PIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:
9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 12:03:10.427682 2563543 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1002 12:03:10.427730 2563543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-361100
	I1002 12:03:10.447019 2563543 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35947 SSHKeyPath:/home/jenkins/minikube-integration/17340-2494243/.minikube/machines/multinode-361100/id_rsa Username:docker}
	I1002 12:03:10.617187 2563543 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token z3qxtd.xuzie28485kk06m5 --discovery-token-ca-cert-hash sha256:bafa40ad46197010727e96472103cc853e44f24d916d26f9ef93bdc8a951c012 
	I1002 12:03:10.621343 2563543 start.go:325] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1002 12:03:10.621385 2563543 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token z3qxtd.xuzie28485kk06m5 --discovery-token-ca-cert-hash sha256:bafa40ad46197010727e96472103cc853e44f24d916d26f9ef93bdc8a951c012 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-361100-m02"
	I1002 12:03:10.676663 2563543 command_runner.go:130] ! W1002 12:03:10.676222    1030 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I1002 12:03:10.714770 2563543 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1045-aws\n", err: exit status 1
	I1002 12:03:10.802972 2563543 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 12:03:13.483299 2563543 command_runner.go:130] > [preflight] Running pre-flight checks
	I1002 12:03:13.483367 2563543 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I1002 12:03:13.483389 2563543 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1045-aws
	I1002 12:03:13.483403 2563543 command_runner.go:130] > OS: Linux
	I1002 12:03:13.483410 2563543 command_runner.go:130] > CGROUPS_CPU: enabled
	I1002 12:03:13.483417 2563543 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I1002 12:03:13.483423 2563543 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I1002 12:03:13.483430 2563543 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I1002 12:03:13.483438 2563543 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I1002 12:03:13.483445 2563543 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I1002 12:03:13.483452 2563543 command_runner.go:130] > CGROUPS_PIDS: enabled
	I1002 12:03:13.483471 2563543 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I1002 12:03:13.483481 2563543 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I1002 12:03:13.483487 2563543 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I1002 12:03:13.483500 2563543 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I1002 12:03:13.483511 2563543 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 12:03:13.483521 2563543 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 12:03:13.483530 2563543 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1002 12:03:13.483541 2563543 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I1002 12:03:13.483547 2563543 command_runner.go:130] > This node has joined the cluster:
	I1002 12:03:13.483558 2563543 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I1002 12:03:13.483565 2563543 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I1002 12:03:13.483576 2563543 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I1002 12:03:13.483604 2563543 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token z3qxtd.xuzie28485kk06m5 --discovery-token-ca-cert-hash sha256:bafa40ad46197010727e96472103cc853e44f24d916d26f9ef93bdc8a951c012 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-361100-m02": (2.862193572s)
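	The join flow above is the standard two-step: run `kubeadm token create --print-join-command` on the control plane, then execute the printed command on the worker. A sketch of pulling the token and CA-cert hash back out of such a line (values copied from the log output above):

```shell
# The join command as captured from the control plane earlier in the log.
JOIN='kubeadm join control-plane.minikube.internal:8443 --token z3qxtd.xuzie28485kk06m5 --discovery-token-ca-cert-hash sha256:bafa40ad46197010727e96472103cc853e44f24d916d26f9ef93bdc8a951c012'
# Extract the flag values; each sed captures the word after its flag.
TOKEN=$(echo "$JOIN" | sed -n 's/.*--token \([^ ]*\).*/\1/p')
CAHASH=$(echo "$JOIN" | sed -n 's/.*--discovery-token-ca-cert-hash \([^ ]*\).*/\1/p')
echo "$TOKEN"
echo "$CAHASH"
```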
	I1002 12:03:13.483627 2563543 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1002 12:03:13.756435 2563543 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
	I1002 12:03:13.756461 2563543 start.go:306] JoinCluster complete in 3.328866406s
	I1002 12:03:13.756473 2563543 cni.go:84] Creating CNI manager for ""
	I1002 12:03:13.756478 2563543 cni.go:136] 2 nodes found, recommending kindnet
	I1002 12:03:13.756557 2563543 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1002 12:03:13.762605 2563543 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1002 12:03:13.762629 2563543 command_runner.go:130] >   Size: 3841245   	Blocks: 7504       IO Block: 4096   regular file
	I1002 12:03:13.762637 2563543 command_runner.go:130] > Device: 36h/54d	Inode: 2872313     Links: 1
	I1002 12:03:13.762645 2563543 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1002 12:03:13.762653 2563543 command_runner.go:130] > Access: 2023-05-09 19:54:42.000000000 +0000
	I1002 12:03:13.762659 2563543 command_runner.go:130] > Modify: 2023-05-09 19:54:42.000000000 +0000
	I1002 12:03:13.762665 2563543 command_runner.go:130] > Change: 2023-10-02 07:16:56.284353094 +0000
	I1002 12:03:13.762671 2563543 command_runner.go:130] >  Birth: 2023-10-02 07:16:56.244353026 +0000
	I1002 12:03:13.762717 2563543 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.2/kubectl ...
	I1002 12:03:13.762726 2563543 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1002 12:03:13.805556 2563543 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1002 12:03:14.167286 2563543 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I1002 12:03:14.184830 2563543 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I1002 12:03:14.199431 2563543 command_runner.go:130] > serviceaccount/kindnet unchanged
	I1002 12:03:14.235357 2563543 command_runner.go:130] > daemonset.apps/kindnet configured
	I1002 12:03:14.244413 2563543 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17340-2494243/kubeconfig
	I1002 12:03:14.244869 2563543 kapi.go:59] client config for multinode-361100: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/multinode-361100/client.crt", KeyFile:"/home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/multinode-361100/client.key", CAFile:"/home/jenkins/minikube-integration/17340-2494243/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x169df20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 12:03:14.245363 2563543 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1002 12:03:14.245385 2563543 round_trippers.go:469] Request Headers:
	I1002 12:03:14.245431 2563543 round_trippers.go:473]     Accept: application/json, */*
	I1002 12:03:14.245446 2563543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 12:03:14.249392 2563543 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1002 12:03:14.249418 2563543 round_trippers.go:577] Response Headers:
	I1002 12:03:14.249427 2563543 round_trippers.go:580]     Date: Mon, 02 Oct 2023 12:03:14 GMT
	I1002 12:03:14.249468 2563543 round_trippers.go:580]     Audit-Id: ca561c88-5605-4a9e-997f-b898435b49c3
	I1002 12:03:14.249492 2563543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 12:03:14.249508 2563543 round_trippers.go:580]     Content-Type: application/json
	I1002 12:03:14.249517 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f46ce94e-ab39-47cd-9413-72b4b2d8d98a
	I1002 12:03:14.249527 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f0dd078-9de5-43de-a432-b38b9d4a2a6d
	I1002 12:03:14.249576 2563543 round_trippers.go:580]     Content-Length: 291
	I1002 12:03:14.249938 2563543 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"cf3eab07-a74c-49e3-9e4d-6831eea2cf38","resourceVersion":"422","creationTimestamp":"2023-10-02T12:02:12Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
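	The rescale check above reads the coredns deployment's Scale subresource rather than the full deployment object. A sketch parsing the replica count out of that same response body shape (JSON trimmed from the logged response):

```shell
# Trimmed copy of the Scale response body logged above.
BODY='{"kind":"Scale","apiVersion":"autoscaling/v1","spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}'
# Pull .spec.replicas; the pattern anchors on "spec" so the status replica
# count is not matched by mistake.
REPLICAS=$(echo "$BODY" | sed -n 's/.*"spec":{"replicas":\([0-9]*\)}.*/\1/p')
echo "$REPLICAS"   # → 1
```

With the desired count already 1, the "rescaled to 1 replicas" step is effectively a no-op confirmation.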
	I1002 12:03:14.250161 2563543 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-361100" context rescaled to 1 replicas
	I1002 12:03:14.250223 2563543 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1002 12:03:14.253850 2563543 out.go:177] * Verifying Kubernetes components...
	I1002 12:03:14.255585 2563543 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 12:03:14.289274 2563543 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17340-2494243/kubeconfig
	I1002 12:03:14.289643 2563543 kapi.go:59] client config for multinode-361100: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/multinode-361100/client.crt", KeyFile:"/home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/multinode-361100/client.key", CAFile:"/home/jenkins/minikube-integration/17340-2494243/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x169df20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 12:03:14.289979 2563543 node_ready.go:35] waiting up to 6m0s for node "multinode-361100-m02" to be "Ready" ...
	I1002 12:03:14.290080 2563543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-361100-m02
	I1002 12:03:14.290102 2563543 round_trippers.go:469] Request Headers:
	I1002 12:03:14.290131 2563543 round_trippers.go:473]     Accept: application/json, */*
	I1002 12:03:14.290151 2563543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 12:03:14.297855 2563543 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1002 12:03:14.297886 2563543 round_trippers.go:577] Response Headers:
	I1002 12:03:14.297894 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f46ce94e-ab39-47cd-9413-72b4b2d8d98a
	I1002 12:03:14.297901 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f0dd078-9de5-43de-a432-b38b9d4a2a6d
	I1002 12:03:14.297907 2563543 round_trippers.go:580]     Date: Mon, 02 Oct 2023 12:03:14 GMT
	I1002 12:03:14.297917 2563543 round_trippers.go:580]     Audit-Id: 2b6f2f34-1108-4e53-93a0-7cee980ec69c
	I1002 12:03:14.297945 2563543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 12:03:14.297959 2563543 round_trippers.go:580]     Content-Type: application/json
	I1002 12:03:14.298635 2563543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-361100-m02","uid":"18505935-3298-4ddf-87a5-e1cc031258d4","resourceVersion":"457","creationTimestamp":"2023-10-02T12:03:13Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-361100-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T12:03:13Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-02T12:03:13Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5183 chars]
	I1002 12:03:14.299255 2563543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-361100-m02
	I1002 12:03:14.299277 2563543 round_trippers.go:469] Request Headers:
	I1002 12:03:14.299291 2563543 round_trippers.go:473]     Accept: application/json, */*
	I1002 12:03:14.299326 2563543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 12:03:14.303280 2563543 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1002 12:03:14.303309 2563543 round_trippers.go:577] Response Headers:
	I1002 12:03:14.303317 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f46ce94e-ab39-47cd-9413-72b4b2d8d98a
	I1002 12:03:14.303328 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f0dd078-9de5-43de-a432-b38b9d4a2a6d
	I1002 12:03:14.303365 2563543 round_trippers.go:580]     Date: Mon, 02 Oct 2023 12:03:14 GMT
	I1002 12:03:14.303382 2563543 round_trippers.go:580]     Audit-Id: 49ca9277-146a-4f82-8ff9-7b8f0732beab
	I1002 12:03:14.303389 2563543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 12:03:14.303399 2563543 round_trippers.go:580]     Content-Type: application/json
	I1002 12:03:14.304049 2563543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-361100-m02","uid":"18505935-3298-4ddf-87a5-e1cc031258d4","resourceVersion":"457","creationTimestamp":"2023-10-02T12:03:13Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-361100-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T12:03:13Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-02T12:03:13Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5183 chars]
	I1002 12:03:14.805376 2563543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-361100-m02
	I1002 12:03:14.805397 2563543 round_trippers.go:469] Request Headers:
	I1002 12:03:14.805407 2563543 round_trippers.go:473]     Accept: application/json, */*
	I1002 12:03:14.805415 2563543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 12:03:14.807897 2563543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 12:03:14.807922 2563543 round_trippers.go:577] Response Headers:
	I1002 12:03:14.807931 2563543 round_trippers.go:580]     Audit-Id: f8dfabb4-61b9-4d88-99cd-7d55ba99325d
	I1002 12:03:14.807938 2563543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 12:03:14.807944 2563543 round_trippers.go:580]     Content-Type: application/json
	I1002 12:03:14.807950 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f46ce94e-ab39-47cd-9413-72b4b2d8d98a
	I1002 12:03:14.807956 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f0dd078-9de5-43de-a432-b38b9d4a2a6d
	I1002 12:03:14.807963 2563543 round_trippers.go:580]     Date: Mon, 02 Oct 2023 12:03:14 GMT
	I1002 12:03:14.808290 2563543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-361100-m02","uid":"18505935-3298-4ddf-87a5-e1cc031258d4","resourceVersion":"457","creationTimestamp":"2023-10-02T12:03:13Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-361100-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T12:03:13Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-02T12:03:13Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5183 chars]
	I1002 12:03:15.305545 2563543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-361100-m02
	I1002 12:03:15.305568 2563543 round_trippers.go:469] Request Headers:
	I1002 12:03:15.305578 2563543 round_trippers.go:473]     Accept: application/json, */*
	I1002 12:03:15.305587 2563543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 12:03:15.308011 2563543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 12:03:15.308121 2563543 round_trippers.go:577] Response Headers:
	I1002 12:03:15.308141 2563543 round_trippers.go:580]     Audit-Id: f7287842-c04f-4e77-8f05-1d3fa3afd04d
	I1002 12:03:15.308149 2563543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 12:03:15.308156 2563543 round_trippers.go:580]     Content-Type: application/json
	I1002 12:03:15.308162 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f46ce94e-ab39-47cd-9413-72b4b2d8d98a
	I1002 12:03:15.308168 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f0dd078-9de5-43de-a432-b38b9d4a2a6d
	I1002 12:03:15.308175 2563543 round_trippers.go:580]     Date: Mon, 02 Oct 2023 12:03:15 GMT
	I1002 12:03:15.308332 2563543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-361100-m02","uid":"18505935-3298-4ddf-87a5-e1cc031258d4","resourceVersion":"478","creationTimestamp":"2023-10-02T12:03:13Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-361100-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-02T12:03:13Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T12:03:13Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5378 chars]
	I1002 12:03:15.308757 2563543 node_ready.go:49] node "multinode-361100-m02" has status "Ready":"True"
	I1002 12:03:15.308800 2563543 node_ready.go:38] duration metric: took 1.018785584s waiting for node "multinode-361100-m02" to be "Ready" ...
	I1002 12:03:15.308812 2563543 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 12:03:15.308883 2563543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1002 12:03:15.308893 2563543 round_trippers.go:469] Request Headers:
	I1002 12:03:15.308902 2563543 round_trippers.go:473]     Accept: application/json, */*
	I1002 12:03:15.308909 2563543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 12:03:15.312656 2563543 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1002 12:03:15.312677 2563543 round_trippers.go:577] Response Headers:
	I1002 12:03:15.312687 2563543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 12:03:15.312693 2563543 round_trippers.go:580]     Content-Type: application/json
	I1002 12:03:15.312701 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f46ce94e-ab39-47cd-9413-72b4b2d8d98a
	I1002 12:03:15.312707 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f0dd078-9de5-43de-a432-b38b9d4a2a6d
	I1002 12:03:15.312728 2563543 round_trippers.go:580]     Date: Mon, 02 Oct 2023 12:03:15 GMT
	I1002 12:03:15.312740 2563543 round_trippers.go:580]     Audit-Id: 93cd1dc0-a574-4862-89d1-2e601b81cf1c
	I1002 12:03:15.313426 2563543 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"478"},"items":[{"metadata":{"name":"coredns-5dd5756b68-t8gwn","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"43f1fa61-5afd-4b63-abf4-f27325b4e897","resourceVersion":"418","creationTimestamp":"2023-10-02T12:02:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c6088028-e1c7-4914-b9bd-030d03ef63a9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T12:02:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c6088028-e1c7-4914-b9bd-030d03ef63a9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 68972 chars]
	I1002 12:03:15.316421 2563543 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-t8gwn" in "kube-system" namespace to be "Ready" ...
	I1002 12:03:15.316547 2563543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-t8gwn
	I1002 12:03:15.316562 2563543 round_trippers.go:469] Request Headers:
	I1002 12:03:15.316571 2563543 round_trippers.go:473]     Accept: application/json, */*
	I1002 12:03:15.316578 2563543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 12:03:15.319258 2563543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 12:03:15.319284 2563543 round_trippers.go:577] Response Headers:
	I1002 12:03:15.319293 2563543 round_trippers.go:580]     Audit-Id: c4813296-32a9-4e2d-82d8-54e931455aad
	I1002 12:03:15.319301 2563543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 12:03:15.319307 2563543 round_trippers.go:580]     Content-Type: application/json
	I1002 12:03:15.319314 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f46ce94e-ab39-47cd-9413-72b4b2d8d98a
	I1002 12:03:15.319321 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f0dd078-9de5-43de-a432-b38b9d4a2a6d
	I1002 12:03:15.319331 2563543 round_trippers.go:580]     Date: Mon, 02 Oct 2023 12:03:15 GMT
	I1002 12:03:15.319740 2563543 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-t8gwn","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"43f1fa61-5afd-4b63-abf4-f27325b4e897","resourceVersion":"418","creationTimestamp":"2023-10-02T12:02:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c6088028-e1c7-4914-b9bd-030d03ef63a9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T12:02:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c6088028-e1c7-4914-b9bd-030d03ef63a9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6263 chars]
	I1002 12:03:15.320350 2563543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-361100
	I1002 12:03:15.320369 2563543 round_trippers.go:469] Request Headers:
	I1002 12:03:15.320379 2563543 round_trippers.go:473]     Accept: application/json, */*
	I1002 12:03:15.320386 2563543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 12:03:15.325639 2563543 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1002 12:03:15.325673 2563543 round_trippers.go:577] Response Headers:
	I1002 12:03:15.325683 2563543 round_trippers.go:580]     Audit-Id: 9d57f4a0-c3a1-46c1-86a7-804456c42eda
	I1002 12:03:15.325691 2563543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 12:03:15.325698 2563543 round_trippers.go:580]     Content-Type: application/json
	I1002 12:03:15.325705 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f46ce94e-ab39-47cd-9413-72b4b2d8d98a
	I1002 12:03:15.325716 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f0dd078-9de5-43de-a432-b38b9d4a2a6d
	I1002 12:03:15.325726 2563543 round_trippers.go:580]     Date: Mon, 02 Oct 2023 12:03:15 GMT
	I1002 12:03:15.325860 2563543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-361100","uid":"addea54c-14e4-4213-b46d-af4c62d4c4db","resourceVersion":"402","creationTimestamp":"2023-10-02T12:02:09Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-361100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-361100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T12_02_13_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T12:02:09Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1002 12:03:15.326293 2563543 pod_ready.go:92] pod "coredns-5dd5756b68-t8gwn" in "kube-system" namespace has status "Ready":"True"
	I1002 12:03:15.326313 2563543 pod_ready.go:81] duration metric: took 9.858826ms waiting for pod "coredns-5dd5756b68-t8gwn" in "kube-system" namespace to be "Ready" ...
	I1002 12:03:15.326326 2563543 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-361100" in "kube-system" namespace to be "Ready" ...
	I1002 12:03:15.326397 2563543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-361100
	I1002 12:03:15.326408 2563543 round_trippers.go:469] Request Headers:
	I1002 12:03:15.326417 2563543 round_trippers.go:473]     Accept: application/json, */*
	I1002 12:03:15.326424 2563543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 12:03:15.328984 2563543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 12:03:15.329014 2563543 round_trippers.go:577] Response Headers:
	I1002 12:03:15.329024 2563543 round_trippers.go:580]     Audit-Id: 873f05f5-08c7-467c-94e8-bc26f26d589f
	I1002 12:03:15.329031 2563543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 12:03:15.329037 2563543 round_trippers.go:580]     Content-Type: application/json
	I1002 12:03:15.329045 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f46ce94e-ab39-47cd-9413-72b4b2d8d98a
	I1002 12:03:15.329054 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f0dd078-9de5-43de-a432-b38b9d4a2a6d
	I1002 12:03:15.329065 2563543 round_trippers.go:580]     Date: Mon, 02 Oct 2023 12:03:15 GMT
	I1002 12:03:15.329164 2563543 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-361100","namespace":"kube-system","uid":"1591bfe3-2d80-4139-90be-0848d69c2065","resourceVersion":"390","creationTimestamp":"2023-10-02T12:02:13Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"f946def84b54153bbd407a93c6520aa6","kubernetes.io/config.mirror":"f946def84b54153bbd407a93c6520aa6","kubernetes.io/config.seen":"2023-10-02T12:02:12.749628660Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-361100","uid":"addea54c-14e4-4213-b46d-af4c62d4c4db","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T12:02:13Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5833 chars]
	I1002 12:03:15.329644 2563543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-361100
	I1002 12:03:15.329661 2563543 round_trippers.go:469] Request Headers:
	I1002 12:03:15.329671 2563543 round_trippers.go:473]     Accept: application/json, */*
	I1002 12:03:15.329678 2563543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 12:03:15.332142 2563543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 12:03:15.332167 2563543 round_trippers.go:577] Response Headers:
	I1002 12:03:15.332176 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f0dd078-9de5-43de-a432-b38b9d4a2a6d
	I1002 12:03:15.332184 2563543 round_trippers.go:580]     Date: Mon, 02 Oct 2023 12:03:15 GMT
	I1002 12:03:15.332190 2563543 round_trippers.go:580]     Audit-Id: 08614130-ead2-4d77-b88c-4f441742382b
	I1002 12:03:15.332196 2563543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 12:03:15.332202 2563543 round_trippers.go:580]     Content-Type: application/json
	I1002 12:03:15.332208 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f46ce94e-ab39-47cd-9413-72b4b2d8d98a
	I1002 12:03:15.332430 2563543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-361100","uid":"addea54c-14e4-4213-b46d-af4c62d4c4db","resourceVersion":"402","creationTimestamp":"2023-10-02T12:02:09Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-361100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-361100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T12_02_13_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T12:02:09Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1002 12:03:15.332883 2563543 pod_ready.go:92] pod "etcd-multinode-361100" in "kube-system" namespace has status "Ready":"True"
	I1002 12:03:15.332906 2563543 pod_ready.go:81] duration metric: took 6.56781ms waiting for pod "etcd-multinode-361100" in "kube-system" namespace to be "Ready" ...
	I1002 12:03:15.332924 2563543 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-361100" in "kube-system" namespace to be "Ready" ...
	I1002 12:03:15.333006 2563543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-361100
	I1002 12:03:15.333017 2563543 round_trippers.go:469] Request Headers:
	I1002 12:03:15.333025 2563543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 12:03:15.333032 2563543 round_trippers.go:473]     Accept: application/json, */*
	I1002 12:03:15.335590 2563543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 12:03:15.335616 2563543 round_trippers.go:577] Response Headers:
	I1002 12:03:15.335625 2563543 round_trippers.go:580]     Audit-Id: d56ce64d-42ef-463a-ba40-4f1021e29f62
	I1002 12:03:15.335631 2563543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 12:03:15.335638 2563543 round_trippers.go:580]     Content-Type: application/json
	I1002 12:03:15.335644 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f46ce94e-ab39-47cd-9413-72b4b2d8d98a
	I1002 12:03:15.335650 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f0dd078-9de5-43de-a432-b38b9d4a2a6d
	I1002 12:03:15.335657 2563543 round_trippers.go:580]     Date: Mon, 02 Oct 2023 12:03:15 GMT
	I1002 12:03:15.336009 2563543 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-361100","namespace":"kube-system","uid":"2e175694-616d-4084-8747-9c93a50196fe","resourceVersion":"391","creationTimestamp":"2023-10-02T12:02:13Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"e8c5df593d111e0d872f6b8579c917b6","kubernetes.io/config.mirror":"e8c5df593d111e0d872f6b8579c917b6","kubernetes.io/config.seen":"2023-10-02T12:02:12.749634961Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-361100","uid":"addea54c-14e4-4213-b46d-af4c62d4c4db","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T12:02:13Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8219 chars]
	I1002 12:03:15.336638 2563543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-361100
	I1002 12:03:15.336657 2563543 round_trippers.go:469] Request Headers:
	I1002 12:03:15.336667 2563543 round_trippers.go:473]     Accept: application/json, */*
	I1002 12:03:15.336675 2563543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 12:03:15.339089 2563543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 12:03:15.339113 2563543 round_trippers.go:577] Response Headers:
	I1002 12:03:15.339123 2563543 round_trippers.go:580]     Audit-Id: 02b2b91b-8151-4ae8-a12f-90294334e019
	I1002 12:03:15.339131 2563543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 12:03:15.339139 2563543 round_trippers.go:580]     Content-Type: application/json
	I1002 12:03:15.339145 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f46ce94e-ab39-47cd-9413-72b4b2d8d98a
	I1002 12:03:15.339156 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f0dd078-9de5-43de-a432-b38b9d4a2a6d
	I1002 12:03:15.339162 2563543 round_trippers.go:580]     Date: Mon, 02 Oct 2023 12:03:15 GMT
	I1002 12:03:15.339403 2563543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-361100","uid":"addea54c-14e4-4213-b46d-af4c62d4c4db","resourceVersion":"402","creationTimestamp":"2023-10-02T12:02:09Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-361100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-361100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T12_02_13_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T12:02:09Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1002 12:03:15.339800 2563543 pod_ready.go:92] pod "kube-apiserver-multinode-361100" in "kube-system" namespace has status "Ready":"True"
	I1002 12:03:15.339820 2563543 pod_ready.go:81] duration metric: took 6.887794ms waiting for pod "kube-apiserver-multinode-361100" in "kube-system" namespace to be "Ready" ...
	I1002 12:03:15.339832 2563543 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-361100" in "kube-system" namespace to be "Ready" ...
	I1002 12:03:15.339917 2563543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-361100
	I1002 12:03:15.339929 2563543 round_trippers.go:469] Request Headers:
	I1002 12:03:15.339938 2563543 round_trippers.go:473]     Accept: application/json, */*
	I1002 12:03:15.339945 2563543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 12:03:15.342585 2563543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 12:03:15.342654 2563543 round_trippers.go:577] Response Headers:
	I1002 12:03:15.342690 2563543 round_trippers.go:580]     Audit-Id: f98f6604-c971-4500-ba43-32f32a300fba
	I1002 12:03:15.342706 2563543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 12:03:15.342726 2563543 round_trippers.go:580]     Content-Type: application/json
	I1002 12:03:15.342741 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f46ce94e-ab39-47cd-9413-72b4b2d8d98a
	I1002 12:03:15.342748 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f0dd078-9de5-43de-a432-b38b9d4a2a6d
	I1002 12:03:15.342754 2563543 round_trippers.go:580]     Date: Mon, 02 Oct 2023 12:03:15 GMT
	I1002 12:03:15.342971 2563543 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-361100","namespace":"kube-system","uid":"6333c350-8aae-41f5-b761-b1c0e8bb58c8","resourceVersion":"392","creationTimestamp":"2023-10-02T12:02:13Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"033d17b0dc058901bd6cd65357fc9f2b","kubernetes.io/config.mirror":"033d17b0dc058901bd6cd65357fc9f2b","kubernetes.io/config.seen":"2023-10-02T12:02:12.749636520Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-361100","uid":"addea54c-14e4-4213-b46d-af4c62d4c4db","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T12:02:13Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7794 chars]
	I1002 12:03:15.343529 2563543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-361100
	I1002 12:03:15.343546 2563543 round_trippers.go:469] Request Headers:
	I1002 12:03:15.343556 2563543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 12:03:15.343564 2563543 round_trippers.go:473]     Accept: application/json, */*
	I1002 12:03:15.346025 2563543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 12:03:15.346049 2563543 round_trippers.go:577] Response Headers:
	I1002 12:03:15.346058 2563543 round_trippers.go:580]     Date: Mon, 02 Oct 2023 12:03:15 GMT
	I1002 12:03:15.346065 2563543 round_trippers.go:580]     Audit-Id: 93d0c525-911f-4927-979d-ff776c2d1cf9
	I1002 12:03:15.346072 2563543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 12:03:15.346080 2563543 round_trippers.go:580]     Content-Type: application/json
	I1002 12:03:15.346086 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f46ce94e-ab39-47cd-9413-72b4b2d8d98a
	I1002 12:03:15.346092 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f0dd078-9de5-43de-a432-b38b9d4a2a6d
	I1002 12:03:15.346283 2563543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-361100","uid":"addea54c-14e4-4213-b46d-af4c62d4c4db","resourceVersion":"402","creationTimestamp":"2023-10-02T12:02:09Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-361100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-361100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T12_02_13_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T12:02:09Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1002 12:03:15.346689 2563543 pod_ready.go:92] pod "kube-controller-manager-multinode-361100" in "kube-system" namespace has status "Ready":"True"
	I1002 12:03:15.346700 2563543 pod_ready.go:81] duration metric: took 6.860874ms waiting for pod "kube-controller-manager-multinode-361100" in "kube-system" namespace to be "Ready" ...
	I1002 12:03:15.346713 2563543 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6gfcj" in "kube-system" namespace to be "Ready" ...
	I1002 12:03:15.506082 2563543 request.go:629] Waited for 159.301684ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6gfcj
	I1002 12:03:15.506164 2563543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6gfcj
	I1002 12:03:15.506171 2563543 round_trippers.go:469] Request Headers:
	I1002 12:03:15.506179 2563543 round_trippers.go:473]     Accept: application/json, */*
	I1002 12:03:15.506186 2563543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 12:03:15.509028 2563543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 12:03:15.509101 2563543 round_trippers.go:577] Response Headers:
	I1002 12:03:15.509124 2563543 round_trippers.go:580]     Audit-Id: 216f634e-94c5-421a-9822-774acbf23180
	I1002 12:03:15.509136 2563543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 12:03:15.509143 2563543 round_trippers.go:580]     Content-Type: application/json
	I1002 12:03:15.509149 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f46ce94e-ab39-47cd-9413-72b4b2d8d98a
	I1002 12:03:15.509155 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f0dd078-9de5-43de-a432-b38b9d4a2a6d
	I1002 12:03:15.509171 2563543 round_trippers.go:580]     Date: Mon, 02 Oct 2023 12:03:15 GMT
	I1002 12:03:15.509327 2563543 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-6gfcj","generateName":"kube-proxy-","namespace":"kube-system","uid":"356394df-71a5-4114-99de-9d594ec624ca","resourceVersion":"383","creationTimestamp":"2023-10-02T12:02:25Z","labels":{"controller-revision-hash":"5cbdb8dcbd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"191a9eb9-84d3-454f-b72a-9b074e5abff1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T12:02:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"191a9eb9-84d3-454f-b72a-9b074e5abff1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5510 chars]
	I1002 12:03:15.706218 2563543 request.go:629] Waited for 196.366964ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-361100
	I1002 12:03:15.706305 2563543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-361100
	I1002 12:03:15.706354 2563543 round_trippers.go:469] Request Headers:
	I1002 12:03:15.706387 2563543 round_trippers.go:473]     Accept: application/json, */*
	I1002 12:03:15.706399 2563543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 12:03:15.710090 2563543 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1002 12:03:15.710120 2563543 round_trippers.go:577] Response Headers:
	I1002 12:03:15.710132 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f0dd078-9de5-43de-a432-b38b9d4a2a6d
	I1002 12:03:15.710140 2563543 round_trippers.go:580]     Date: Mon, 02 Oct 2023 12:03:15 GMT
	I1002 12:03:15.710146 2563543 round_trippers.go:580]     Audit-Id: b18a9658-f945-43f0-b070-b8340791da03
	I1002 12:03:15.710162 2563543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 12:03:15.710172 2563543 round_trippers.go:580]     Content-Type: application/json
	I1002 12:03:15.710187 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f46ce94e-ab39-47cd-9413-72b4b2d8d98a
	I1002 12:03:15.710343 2563543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-361100","uid":"addea54c-14e4-4213-b46d-af4c62d4c4db","resourceVersion":"402","creationTimestamp":"2023-10-02T12:02:09Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-361100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-361100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T12_02_13_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T12:02:09Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1002 12:03:15.710809 2563543 pod_ready.go:92] pod "kube-proxy-6gfcj" in "kube-system" namespace has status "Ready":"True"
	I1002 12:03:15.710829 2563543 pod_ready.go:81] duration metric: took 364.108994ms waiting for pod "kube-proxy-6gfcj" in "kube-system" namespace to be "Ready" ...
	I1002 12:03:15.710842 2563543 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-ntgk4" in "kube-system" namespace to be "Ready" ...
	I1002 12:03:15.906277 2563543 request.go:629] Waited for 195.349643ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ntgk4
	I1002 12:03:15.906348 2563543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ntgk4
	I1002 12:03:15.906357 2563543 round_trippers.go:469] Request Headers:
	I1002 12:03:15.906367 2563543 round_trippers.go:473]     Accept: application/json, */*
	I1002 12:03:15.906377 2563543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 12:03:15.909319 2563543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 12:03:15.909344 2563543 round_trippers.go:577] Response Headers:
	I1002 12:03:15.909353 2563543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 12:03:15.909360 2563543 round_trippers.go:580]     Content-Type: application/json
	I1002 12:03:15.909366 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f46ce94e-ab39-47cd-9413-72b4b2d8d98a
	I1002 12:03:15.909372 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f0dd078-9de5-43de-a432-b38b9d4a2a6d
	I1002 12:03:15.909379 2563543 round_trippers.go:580]     Date: Mon, 02 Oct 2023 12:03:15 GMT
	I1002 12:03:15.909385 2563543 round_trippers.go:580]     Audit-Id: 74679ae9-b503-48dc-bd3d-b87db97afbb5
	I1002 12:03:15.910055 2563543 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-ntgk4","generateName":"kube-proxy-","namespace":"kube-system","uid":"5d8fd3a0-4219-4782-be12-0fdb03b4c364","resourceVersion":"472","creationTimestamp":"2023-10-02T12:03:13Z","labels":{"controller-revision-hash":"5cbdb8dcbd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"191a9eb9-84d3-454f-b72a-9b074e5abff1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T12:03:13Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"191a9eb9-84d3-454f-b72a-9b074e5abff1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I1002 12:03:16.105942 2563543 request.go:629] Waited for 195.361146ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-361100-m02
	I1002 12:03:16.106038 2563543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-361100-m02
	I1002 12:03:16.106050 2563543 round_trippers.go:469] Request Headers:
	I1002 12:03:16.106060 2563543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 12:03:16.106067 2563543 round_trippers.go:473]     Accept: application/json, */*
	I1002 12:03:16.108710 2563543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 12:03:16.108855 2563543 round_trippers.go:577] Response Headers:
	I1002 12:03:16.108869 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f0dd078-9de5-43de-a432-b38b9d4a2a6d
	I1002 12:03:16.108884 2563543 round_trippers.go:580]     Date: Mon, 02 Oct 2023 12:03:16 GMT
	I1002 12:03:16.108905 2563543 round_trippers.go:580]     Audit-Id: 6a4bdfb9-2db4-4f7b-9fbd-c3d23aa71db1
	I1002 12:03:16.108917 2563543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 12:03:16.108924 2563543 round_trippers.go:580]     Content-Type: application/json
	I1002 12:03:16.108934 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f46ce94e-ab39-47cd-9413-72b4b2d8d98a
	I1002 12:03:16.109087 2563543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-361100-m02","uid":"18505935-3298-4ddf-87a5-e1cc031258d4","resourceVersion":"478","creationTimestamp":"2023-10-02T12:03:13Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-361100-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-02T12:03:13Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T12:03:13Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5378 chars]
	I1002 12:03:16.109510 2563543 pod_ready.go:92] pod "kube-proxy-ntgk4" in "kube-system" namespace has status "Ready":"True"
	I1002 12:03:16.109529 2563543 pod_ready.go:81] duration metric: took 398.680348ms waiting for pod "kube-proxy-ntgk4" in "kube-system" namespace to be "Ready" ...
	I1002 12:03:16.109553 2563543 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-361100" in "kube-system" namespace to be "Ready" ...
	I1002 12:03:16.305905 2563543 request.go:629] Waited for 196.275206ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-361100
	I1002 12:03:16.305995 2563543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-361100
	I1002 12:03:16.306012 2563543 round_trippers.go:469] Request Headers:
	I1002 12:03:16.306022 2563543 round_trippers.go:473]     Accept: application/json, */*
	I1002 12:03:16.306029 2563543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 12:03:16.311683 2563543 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1002 12:03:16.311726 2563543 round_trippers.go:577] Response Headers:
	I1002 12:03:16.311736 2563543 round_trippers.go:580]     Date: Mon, 02 Oct 2023 12:03:16 GMT
	I1002 12:03:16.311743 2563543 round_trippers.go:580]     Audit-Id: 316efbf9-03ce-47ff-bfe8-ae72528ceb2f
	I1002 12:03:16.311749 2563543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 12:03:16.311755 2563543 round_trippers.go:580]     Content-Type: application/json
	I1002 12:03:16.311763 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f46ce94e-ab39-47cd-9413-72b4b2d8d98a
	I1002 12:03:16.311769 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f0dd078-9de5-43de-a432-b38b9d4a2a6d
	I1002 12:03:16.312489 2563543 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-361100","namespace":"kube-system","uid":"bc8f5f7c-fd2d-4ec5-b3b9-ecd4abfb06f7","resourceVersion":"389","creationTimestamp":"2023-10-02T12:02:13Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"4b7655de39be7fe33bd9544841473644","kubernetes.io/config.mirror":"4b7655de39be7fe33bd9544841473644","kubernetes.io/config.seen":"2023-10-02T12:02:12.749659667Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-361100","uid":"addea54c-14e4-4213-b46d-af4c62d4c4db","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T12:02:13Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4676 chars]
	I1002 12:03:16.506251 2563543 request.go:629] Waited for 193.203285ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-361100
	I1002 12:03:16.506332 2563543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-361100
	I1002 12:03:16.506338 2563543 round_trippers.go:469] Request Headers:
	I1002 12:03:16.506347 2563543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 12:03:16.506359 2563543 round_trippers.go:473]     Accept: application/json, */*
	I1002 12:03:16.508983 2563543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 12:03:16.509009 2563543 round_trippers.go:577] Response Headers:
	I1002 12:03:16.509017 2563543 round_trippers.go:580]     Date: Mon, 02 Oct 2023 12:03:16 GMT
	I1002 12:03:16.509024 2563543 round_trippers.go:580]     Audit-Id: 680b69a6-af14-4a2f-929e-8bf356fdfa1d
	I1002 12:03:16.509030 2563543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 12:03:16.509036 2563543 round_trippers.go:580]     Content-Type: application/json
	I1002 12:03:16.509047 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f46ce94e-ab39-47cd-9413-72b4b2d8d98a
	I1002 12:03:16.509055 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f0dd078-9de5-43de-a432-b38b9d4a2a6d
	I1002 12:03:16.509261 2563543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-361100","uid":"addea54c-14e4-4213-b46d-af4c62d4c4db","resourceVersion":"402","creationTimestamp":"2023-10-02T12:02:09Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-361100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-361100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T12_02_13_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T12:02:09Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1002 12:03:16.509705 2563543 pod_ready.go:92] pod "kube-scheduler-multinode-361100" in "kube-system" namespace has status "Ready":"True"
	I1002 12:03:16.509720 2563543 pod_ready.go:81] duration metric: took 400.153457ms waiting for pod "kube-scheduler-multinode-361100" in "kube-system" namespace to be "Ready" ...
	I1002 12:03:16.509732 2563543 pod_ready.go:38] duration metric: took 1.200907412s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 12:03:16.509746 2563543 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 12:03:16.509809 2563543 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 12:03:16.524385 2563543 system_svc.go:56] duration metric: took 14.628382ms WaitForService to wait for kubelet.
	I1002 12:03:16.524462 2563543 kubeadm.go:581] duration metric: took 2.274203162s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1002 12:03:16.524501 2563543 node_conditions.go:102] verifying NodePressure condition ...
	I1002 12:03:16.705861 2563543 request.go:629] Waited for 181.254176ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I1002 12:03:16.705953 2563543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I1002 12:03:16.705964 2563543 round_trippers.go:469] Request Headers:
	I1002 12:03:16.705974 2563543 round_trippers.go:473]     Accept: application/json, */*
	I1002 12:03:16.705982 2563543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 12:03:16.708702 2563543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 12:03:16.708741 2563543 round_trippers.go:577] Response Headers:
	I1002 12:03:16.708750 2563543 round_trippers.go:580]     Audit-Id: a2229a5b-cc59-4027-a2f6-a382496898f7
	I1002 12:03:16.708757 2563543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 12:03:16.708763 2563543 round_trippers.go:580]     Content-Type: application/json
	I1002 12:03:16.708769 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f46ce94e-ab39-47cd-9413-72b4b2d8d98a
	I1002 12:03:16.708779 2563543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f0dd078-9de5-43de-a432-b38b9d4a2a6d
	I1002 12:03:16.708785 2563543 round_trippers.go:580]     Date: Mon, 02 Oct 2023 12:03:16 GMT
	I1002 12:03:16.709033 2563543 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"479"},"items":[{"metadata":{"name":"multinode-361100","uid":"addea54c-14e4-4213-b46d-af4c62d4c4db","resourceVersion":"402","creationTimestamp":"2023-10-02T12:02:09Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-361100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-361100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T12_02_13_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 12452 chars]
	I1002 12:03:16.709674 2563543 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1002 12:03:16.709692 2563543 node_conditions.go:123] node cpu capacity is 2
	I1002 12:03:16.709702 2563543 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1002 12:03:16.709706 2563543 node_conditions.go:123] node cpu capacity is 2
	I1002 12:03:16.709711 2563543 node_conditions.go:105] duration metric: took 185.198944ms to run NodePressure ...
	I1002 12:03:16.709721 2563543 start.go:228] waiting for startup goroutines ...
	I1002 12:03:16.709746 2563543 start.go:242] writing updated cluster config ...
	I1002 12:03:16.710056 2563543 ssh_runner.go:195] Run: rm -f paused
	I1002 12:03:16.773078 2563543 start.go:600] kubectl: 1.28.2, cluster: 1.28.2 (minor skew: 0)
	I1002 12:03:16.776680 2563543 out.go:177] * Done! kubectl is now configured to use "multinode-361100" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* Oct 02 12:02:57 multinode-361100 crio[900]: time="2023-10-02 12:02:57.424596086Z" level=info msg="Starting container: 2c5e8e56b5d671386e280c939127b414bb7a05eadb5fd8efd83f6c791ddb22bc" id=64bd61d0-81f4-4d6a-a0d0-f235bc571c94 name=/runtime.v1.RuntimeService/StartContainer
	Oct 02 12:02:57 multinode-361100 crio[900]: time="2023-10-02 12:02:57.441955219Z" level=info msg="Created container 4d5322ff4b2d43b674169a64ff3644859522818f2314e342b271e7c150057dd9: kube-system/coredns-5dd5756b68-t8gwn/coredns" id=fe2a27ac-4236-4bf8-a909-4e900bc834a0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 12:02:57 multinode-361100 crio[900]: time="2023-10-02 12:02:57.442872578Z" level=info msg="Starting container: 4d5322ff4b2d43b674169a64ff3644859522818f2314e342b271e7c150057dd9" id=cc0a4b65-ac81-40f9-b168-109eada4f5e9 name=/runtime.v1.RuntimeService/StartContainer
	Oct 02 12:02:57 multinode-361100 crio[900]: time="2023-10-02 12:02:57.447600116Z" level=info msg="Started container" PID=1952 containerID=2c5e8e56b5d671386e280c939127b414bb7a05eadb5fd8efd83f6c791ddb22bc description=kube-system/storage-provisioner/storage-provisioner id=64bd61d0-81f4-4d6a-a0d0-f235bc571c94 name=/runtime.v1.RuntimeService/StartContainer sandboxID=6850a5b2f2a9aaf7615fac8352eb0e656809a730c0bf80b30461956eb9045a4e
	Oct 02 12:02:57 multinode-361100 crio[900]: time="2023-10-02 12:02:57.470744322Z" level=info msg="Started container" PID=1963 containerID=4d5322ff4b2d43b674169a64ff3644859522818f2314e342b271e7c150057dd9 description=kube-system/coredns-5dd5756b68-t8gwn/coredns id=cc0a4b65-ac81-40f9-b168-109eada4f5e9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=1b7502787807efa97d140da61f8e5c53ad7486b37828f948cb7efb466b7efd87
	Oct 02 12:03:17 multinode-361100 crio[900]: time="2023-10-02 12:03:17.995577789Z" level=info msg="Running pod sandbox: default/busybox-5bc68d56bd-4tnjh/POD" id=7611a47c-0527-4868-954d-83466ea3bdbe name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 02 12:03:17 multinode-361100 crio[900]: time="2023-10-02 12:03:17.995637497Z" level=warning msg="Allowed annotations are specified for workload []"
	Oct 02 12:03:18 multinode-361100 crio[900]: time="2023-10-02 12:03:18.029284928Z" level=info msg="Got pod network &{Name:busybox-5bc68d56bd-4tnjh Namespace:default ID:248cfcb4bdd0bc03dc948dc7d2d8f72f2231bd66602d96028ac7eab0e8e8fc9f UID:903d347b-457a-4f8d-9d19-20bce1b82daa NetNS:/var/run/netns/9977b254-c505-451d-b44d-4f01373eb3b5 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Oct 02 12:03:18 multinode-361100 crio[900]: time="2023-10-02 12:03:18.029335513Z" level=info msg="Adding pod default_busybox-5bc68d56bd-4tnjh to CNI network \"kindnet\" (type=ptp)"
	Oct 02 12:03:18 multinode-361100 crio[900]: time="2023-10-02 12:03:18.048696379Z" level=info msg="Got pod network &{Name:busybox-5bc68d56bd-4tnjh Namespace:default ID:248cfcb4bdd0bc03dc948dc7d2d8f72f2231bd66602d96028ac7eab0e8e8fc9f UID:903d347b-457a-4f8d-9d19-20bce1b82daa NetNS:/var/run/netns/9977b254-c505-451d-b44d-4f01373eb3b5 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Oct 02 12:03:18 multinode-361100 crio[900]: time="2023-10-02 12:03:18.049121315Z" level=info msg="Checking pod default_busybox-5bc68d56bd-4tnjh for CNI network kindnet (type=ptp)"
	Oct 02 12:03:18 multinode-361100 crio[900]: time="2023-10-02 12:03:18.064076294Z" level=info msg="Ran pod sandbox 248cfcb4bdd0bc03dc948dc7d2d8f72f2231bd66602d96028ac7eab0e8e8fc9f with infra container: default/busybox-5bc68d56bd-4tnjh/POD" id=7611a47c-0527-4868-954d-83466ea3bdbe name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 02 12:03:18 multinode-361100 crio[900]: time="2023-10-02 12:03:18.065917543Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=fe9ebb37-0796-4d1b-8aed-644032ecc081 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 12:03:18 multinode-361100 crio[900]: time="2023-10-02 12:03:18.066180649Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28 not found" id=fe9ebb37-0796-4d1b-8aed-644032ecc081 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 12:03:18 multinode-361100 crio[900]: time="2023-10-02 12:03:18.067608310Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28" id=10bc843a-fbe0-4663-a20f-1c2431a0a409 name=/runtime.v1.ImageService/PullImage
	Oct 02 12:03:18 multinode-361100 crio[900]: time="2023-10-02 12:03:18.069714036Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Oct 02 12:03:18 multinode-361100 crio[900]: time="2023-10-02 12:03:18.808942170Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Oct 02 12:03:20 multinode-361100 crio[900]: time="2023-10-02 12:03:20.280674222Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:859d41e4316c182cb559f9ae3c5ffcac8602ee1179794a1707c06cd092a008d3" id=10bc843a-fbe0-4663-a20f-1c2431a0a409 name=/runtime.v1.ImageService/PullImage
	Oct 02 12:03:20 multinode-361100 crio[900]: time="2023-10-02 12:03:20.282312690Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=fee195dd-1f2d-4a57-bd85-db22b2ea1fc4 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 12:03:20 multinode-361100 crio[900]: time="2023-10-02 12:03:20.283306077Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:89a35e2ebb6b938201966889b5e8c85b931db6432c5643966116cd1c28bf45cd,RepoTags:[gcr.io/k8s-minikube/busybox:1.28],RepoDigests:[gcr.io/k8s-minikube/busybox@sha256:859d41e4316c182cb559f9ae3c5ffcac8602ee1179794a1707c06cd092a008d3 gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12],Size_:1496796,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=fee195dd-1f2d-4a57-bd85-db22b2ea1fc4 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 12:03:20 multinode-361100 crio[900]: time="2023-10-02 12:03:20.285473998Z" level=info msg="Creating container: default/busybox-5bc68d56bd-4tnjh/busybox" id=e8449149-202c-4bd7-bfc5-0b6d8a214196 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 12:03:20 multinode-361100 crio[900]: time="2023-10-02 12:03:20.285623520Z" level=warning msg="Allowed annotations are specified for workload []"
	Oct 02 12:03:20 multinode-361100 crio[900]: time="2023-10-02 12:03:20.408175395Z" level=info msg="Created container efa8ec81753cc18b6bb68379343531e5bf2bb0799ad30eef52c876a34fef4a9f: default/busybox-5bc68d56bd-4tnjh/busybox" id=e8449149-202c-4bd7-bfc5-0b6d8a214196 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 12:03:20 multinode-361100 crio[900]: time="2023-10-02 12:03:20.410847898Z" level=info msg="Starting container: efa8ec81753cc18b6bb68379343531e5bf2bb0799ad30eef52c876a34fef4a9f" id=cd49c73f-7e08-4872-b9c7-d464dd49c3de name=/runtime.v1.RuntimeService/StartContainer
	Oct 02 12:03:20 multinode-361100 crio[900]: time="2023-10-02 12:03:20.421781728Z" level=info msg="Started container" PID=2098 containerID=efa8ec81753cc18b6bb68379343531e5bf2bb0799ad30eef52c876a34fef4a9f description=default/busybox-5bc68d56bd-4tnjh/busybox id=cd49c73f-7e08-4872-b9c7-d464dd49c3de name=/runtime.v1.RuntimeService/StartContainer sandboxID=248cfcb4bdd0bc03dc948dc7d2d8f72f2231bd66602d96028ac7eab0e8e8fc9f
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	efa8ec81753cc       gcr.io/k8s-minikube/busybox@sha256:859d41e4316c182cb559f9ae3c5ffcac8602ee1179794a1707c06cd092a008d3   5 seconds ago        Running             busybox                   0                   248cfcb4bdd0b       busybox-5bc68d56bd-4tnjh
	4d5322ff4b2d4       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                      28 seconds ago       Running             coredns                   0                   1b7502787807e       coredns-5dd5756b68-t8gwn
	2c5e8e56b5d67       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      28 seconds ago       Running             storage-provisioner       0                   6850a5b2f2a9a       storage-provisioner
	88c10822f1726       7da62c127fc0f2c3473babe4dd0fe1da874278c4e524a490b1781e3e0e6dddfa                                      59 seconds ago       Running             kube-proxy                0                   b8273345dd382       kube-proxy-6gfcj
	e05058681e222       04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26                                      59 seconds ago       Running             kindnet-cni               0                   278ef68d2e144       kindnet-2lbdw
	19cd108d7d1d2       89d57b83c17862d0ca2dd214e9e5ad425f8d67ecba32d10b846f8d22d3b5597c                                      About a minute ago   Running             kube-controller-manager   0                   8e4fe7f847d2c       kube-controller-manager-multinode-361100
	48120c4e4912f       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                      About a minute ago   Running             etcd                      0                   47b5a23745707       etcd-multinode-361100
	b8b35c677be3b       64fc40cee3716a4596d219b360ce536adb7998eaeae3f5dbb774d6503f5039d7                                      About a minute ago   Running             kube-scheduler            0                   94e69a6e09863       kube-scheduler-multinode-361100
	7eea9ad917e7c       30bb499447fe1bc655853e2e8ac386cdeb28c80263536259cb54f290f9a58d6c                                      About a minute ago   Running             kube-apiserver            0                   7082e0aebb9fa       kube-apiserver-multinode-361100
	
	* 
	* ==> coredns [4d5322ff4b2d43b674169a64ff3644859522818f2314e342b271e7c150057dd9] <==
	* [INFO] 10.244.0.3:49375 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000122109s
	[INFO] 10.244.1.2:46274 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000112074s
	[INFO] 10.244.1.2:53530 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001069959s
	[INFO] 10.244.1.2:49048 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000104115s
	[INFO] 10.244.1.2:44222 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000079943s
	[INFO] 10.244.1.2:55310 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.000925236s
	[INFO] 10.244.1.2:55776 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000064706s
	[INFO] 10.244.1.2:56890 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000069415s
	[INFO] 10.244.1.2:57638 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000066347s
	[INFO] 10.244.0.3:43540 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000107438s
	[INFO] 10.244.0.3:46344 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000066413s
	[INFO] 10.244.0.3:33415 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000069555s
	[INFO] 10.244.0.3:47787 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000088156s
	[INFO] 10.244.1.2:49813 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000121345s
	[INFO] 10.244.1.2:54878 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000088378s
	[INFO] 10.244.1.2:49814 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000098946s
	[INFO] 10.244.1.2:36588 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000087278s
	[INFO] 10.244.0.3:37502 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000111295s
	[INFO] 10.244.0.3:38095 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000140078s
	[INFO] 10.244.0.3:35680 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000904583s
	[INFO] 10.244.0.3:38980 - 5 "PTR IN 1.58.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000115209s
	[INFO] 10.244.1.2:60618 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000105592s
	[INFO] 10.244.1.2:59612 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000076874s
	[INFO] 10.244.1.2:35114 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00009088s
	[INFO] 10.244.1.2:32929 - 5 "PTR IN 1.58.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000079804s
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-361100
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=multinode-361100
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=45957ed538272972541ab48cdf2c4b323d7f5c18
	                    minikube.k8s.io/name=multinode-361100
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_02T12_02_13_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 02 Oct 2023 12:02:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-361100
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 02 Oct 2023 12:03:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 02 Oct 2023 12:02:56 +0000   Mon, 02 Oct 2023 12:02:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 02 Oct 2023 12:02:56 +0000   Mon, 02 Oct 2023 12:02:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 02 Oct 2023 12:02:56 +0000   Mon, 02 Oct 2023 12:02:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 02 Oct 2023 12:02:56 +0000   Mon, 02 Oct 2023 12:02:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    multinode-361100
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	System Info:
	  Machine ID:                 e061d3a95e7f4848b2f8768b21361a46
	  System UUID:                a1cda9dc-1928-4a6a-aa6f-f9cb4e5c797f
	  Boot ID:                    67922263-14c1-496d-a009-5b9469adca8d
	  Kernel Version:             5.15.0-1045-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.2
	  Kube-Proxy Version:         v1.28.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-4tnjh                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 coredns-5dd5756b68-t8gwn                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     60s
	  kube-system                 etcd-multinode-361100                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         72s
	  kube-system                 kindnet-2lbdw                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      60s
	  kube-system                 kube-apiserver-multinode-361100             250m (12%)    0 (0%)      0 (0%)           0 (0%)         72s
	  kube-system                 kube-controller-manager-multinode-361100    200m (10%)    0 (0%)      0 (0%)           0 (0%)         72s
	  kube-system                 kube-proxy-6gfcj                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 kube-scheduler-multinode-361100             100m (5%)     0 (0%)      0 (0%)           0 (0%)         72s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         59s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 59s   kube-proxy       
	  Normal  Starting                 81s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  80s   kubelet          Node multinode-361100 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    80s   kubelet          Node multinode-361100 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 73s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  73s   kubelet          Node multinode-361100 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    73s   kubelet          Node multinode-361100 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     73s   kubelet          Node multinode-361100 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           61s   node-controller  Node multinode-361100 event: Registered Node multinode-361100 in Controller
	  Normal  NodeReady                29s   kubelet          Node multinode-361100 status is now: NodeReady
	
	
	Name:               multinode-361100-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=multinode-361100-m02
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 02 Oct 2023 12:03:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-361100-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 02 Oct 2023 12:03:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 02 Oct 2023 12:03:15 +0000   Mon, 02 Oct 2023 12:03:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 02 Oct 2023 12:03:15 +0000   Mon, 02 Oct 2023 12:03:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 02 Oct 2023 12:03:15 +0000   Mon, 02 Oct 2023 12:03:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 02 Oct 2023 12:03:15 +0000   Mon, 02 Oct 2023 12:03:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.3
	  Hostname:    multinode-361100-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	System Info:
	  Machine ID:                 0eb47560dbfc4dd0abb0b9557ced9998
	  System UUID:                17671c51-7fb2-41ed-94c2-48e48c26b4a4
	  Boot ID:                    67922263-14c1-496d-a009-5b9469adca8d
	  Kernel Version:             5.15.0-1045-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.2
	  Kube-Proxy Version:         v1.28.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-wmx6q    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kindnet-jrjv5               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      12s
	  kube-system                 kube-proxy-ntgk4            0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 11s                kube-proxy       
	  Normal  NodeHasSufficientMemory  12s (x5 over 14s)  kubelet          Node multinode-361100-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12s (x5 over 14s)  kubelet          Node multinode-361100-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12s (x5 over 14s)  kubelet          Node multinode-361100-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           11s                node-controller  Node multinode-361100-m02 event: Registered Node multinode-361100-m02 in Controller
	  Normal  NodeReady                10s                kubelet          Node multinode-361100-m02 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.001115] FS-Cache: O-key=[8] 'b3495c0100000000'
	[  +0.000697] FS-Cache: N-cookie c=000000c0 [p=000000b7 fl=2 nc=0 na=1]
	[  +0.000936] FS-Cache: N-cookie d=00000000b3d4d329{9p.inode} n=00000000c4ccfeb8
	[  +0.001126] FS-Cache: N-key=[8] 'b3495c0100000000'
	[  +0.003366] FS-Cache: Duplicate cookie detected
	[  +0.000731] FS-Cache: O-cookie c=000000b9 [p=000000b7 fl=226 nc=0 na=1]
	[  +0.001023] FS-Cache: O-cookie d=00000000b3d4d329{9p.inode} n=00000000389fe983
	[  +0.001046] FS-Cache: O-key=[8] 'b3495c0100000000'
	[  +0.000724] FS-Cache: N-cookie c=000000c1 [p=000000b7 fl=2 nc=0 na=1]
	[  +0.001003] FS-Cache: N-cookie d=00000000b3d4d329{9p.inode} n=000000006a94d035
	[  +0.001082] FS-Cache: N-key=[8] 'b3495c0100000000'
	[  +2.104034] FS-Cache: Duplicate cookie detected
	[  +0.000750] FS-Cache: O-cookie c=000000b8 [p=000000b7 fl=226 nc=0 na=1]
	[  +0.000953] FS-Cache: O-cookie d=00000000b3d4d329{9p.inode} n=000000000fb754f7
	[  +0.001029] FS-Cache: O-key=[8] 'b2495c0100000000'
	[  +0.000775] FS-Cache: N-cookie c=000000c3 [p=000000b7 fl=2 nc=0 na=1]
	[  +0.000924] FS-Cache: N-cookie d=00000000b3d4d329{9p.inode} n=00000000c4ccfeb8
	[  +0.001095] FS-Cache: N-key=[8] 'b2495c0100000000'
	[  +0.359690] FS-Cache: Duplicate cookie detected
	[  +0.000696] FS-Cache: O-cookie c=000000bd [p=000000b7 fl=226 nc=0 na=1]
	[  +0.000971] FS-Cache: O-cookie d=00000000b3d4d329{9p.inode} n=00000000bde6dc72
	[  +0.001094] FS-Cache: O-key=[8] 'b8495c0100000000'
	[  +0.000774] FS-Cache: N-cookie c=000000c4 [p=000000b7 fl=2 nc=0 na=1]
	[  +0.000930] FS-Cache: N-cookie d=00000000b3d4d329{9p.inode} n=00000000e33aca47
	[  +0.001035] FS-Cache: N-key=[8] 'b8495c0100000000'
	
	* 
	* ==> etcd [48120c4e4912ff5b3706397bf708b886130429a8a8bc39125cf5e81245530447] <==
	* {"level":"info","ts":"2023-10-02T12:02:05.890562Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 switched to configuration voters=(12882097698489969905)"}
	{"level":"info","ts":"2023-10-02T12:02:05.890685Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","added-peer-id":"b2c6679ac05f2cf1","added-peer-peer-urls":["https://192.168.58.2:2380"]}
	{"level":"info","ts":"2023-10-02T12:02:05.892139Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-10-02T12:02:05.892329Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-10-02T12:02:05.896529Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-10-02T12:02:05.897235Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"b2c6679ac05f2cf1","initial-advertise-peer-urls":["https://192.168.58.2:2380"],"listen-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.58.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-10-02T12:02:05.897309Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-10-02T12:02:06.228572Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 is starting a new election at term 1"}
	{"level":"info","ts":"2023-10-02T12:02:06.228698Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-10-02T12:02:06.228738Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 1"}
	{"level":"info","ts":"2023-10-02T12:02:06.228795Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 2"}
	{"level":"info","ts":"2023-10-02T12:02:06.228831Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-10-02T12:02:06.228881Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 2"}
	{"level":"info","ts":"2023-10-02T12:02:06.22892Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-10-02T12:02:06.232729Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-02T12:02:06.236142Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:multinode-361100 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2023-10-02T12:02:06.236228Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-02T12:02:06.237358Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-10-02T12:02:06.237635Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-02T12:02:06.237797Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-02T12:02:06.237874Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-02T12:02:06.237912Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-02T12:02:06.239057Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.58.2:2379"}
	{"level":"info","ts":"2023-10-02T12:02:06.24457Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-10-02T12:02:06.244661Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	* 
	* ==> kernel <==
	*  12:03:25 up 19:45,  0 users,  load average: 1.49, 1.96, 1.90
	Linux multinode-361100 5.15.0-1045-aws #50~20.04.1-Ubuntu SMP Wed Sep 6 17:32:55 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [e05058681e22259376465875cdaec40ee2f99b84e1c99ba6683f7e7b8b6bf5f8] <==
	* I1002 12:02:26.147951       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I1002 12:02:26.148028       1 main.go:107] hostIP = 192.168.58.2
	podIP = 192.168.58.2
	I1002 12:02:26.148163       1 main.go:116] setting mtu 1500 for CNI 
	I1002 12:02:26.148173       1 main.go:146] kindnetd IP family: "ipv4"
	I1002 12:02:26.148187       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I1002 12:02:56.431897       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: i/o timeout
	I1002 12:02:56.446471       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1002 12:02:56.446498       1 main.go:227] handling current node
	I1002 12:03:06.464799       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1002 12:03:06.464825       1 main.go:227] handling current node
	I1002 12:03:16.476468       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1002 12:03:16.476495       1 main.go:227] handling current node
	I1002 12:03:16.476506       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I1002 12:03:16.476512       1 main.go:250] Node multinode-361100-m02 has CIDR [10.244.1.0/24] 
	I1002 12:03:16.476683       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.58.3 Flags: [] Table: 0} 
	
	* 
	* ==> kube-apiserver [7eea9ad917e7cfa7bbb9016a2f9c82eb4fae90b6ea9a9b2e8f7dff9679df8d7b] <==
	* I1002 12:02:09.516748       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1002 12:02:09.515977       1 controller.go:624] quota admission added evaluator for: namespaces
	E1002 12:02:09.528392       1 controller.go:146] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1002 12:02:09.531120       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1002 12:02:09.742780       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1002 12:02:10.029696       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1002 12:02:10.038377       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1002 12:02:10.038410       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1002 12:02:10.811476       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1002 12:02:10.885795       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1002 12:02:11.046897       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1002 12:02:11.055187       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.58.2]
	I1002 12:02:11.056409       1 controller.go:624] quota admission added evaluator for: endpoints
	I1002 12:02:11.065710       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1002 12:02:11.458041       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	E1002 12:02:12.565693       1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
	E1002 12:02:12.565745       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E1002 12:02:12.566225       1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 457.682µs, panicked: false, err: context canceled, panic-reason: <nil>
	E1002 12:02:12.567056       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E1002 12:02:12.568295       1 timeout.go:142] post-timeout activity - time-elapsed: 3.068197ms, POST "/api/v1/namespaces/default/events" result: <nil>
	I1002 12:02:12.683585       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1002 12:02:12.699119       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1002 12:02:12.723907       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1002 12:02:25.117762       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1002 12:02:25.217159       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	* 
	* ==> kube-controller-manager [19cd108d7d1d2267da20cf289b38372977edeca63965b90d777491a505e34730] <==
	* I1002 12:02:26.162449       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="74.207µs"
	I1002 12:02:56.980835       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="100.825µs"
	I1002 12:02:57.001247       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="70.72µs"
	I1002 12:02:58.046853       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="17.253837ms"
	I1002 12:02:58.047209       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="102.457µs"
	I1002 12:02:59.970288       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1002 12:03:13.321768       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-361100-m02\" does not exist"
	I1002 12:03:13.335408       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-361100-m02" podCIDRs=["10.244.1.0/24"]
	I1002 12:03:13.350010       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-ntgk4"
	I1002 12:03:13.365845       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-jrjv5"
	I1002 12:03:14.971732       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-361100-m02"
	I1002 12:03:14.971892       1 event.go:307] "Event occurred" object="multinode-361100-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-361100-m02 event: Registered Node multinode-361100-m02 in Controller"
	I1002 12:03:15.293650       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-361100-m02"
	I1002 12:03:17.654193       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-5bc68d56bd to 2"
	I1002 12:03:17.666431       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-wmx6q"
	I1002 12:03:17.674949       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-4tnjh"
	I1002 12:03:17.700998       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="47.136349ms"
	I1002 12:03:17.734925       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="33.790216ms"
	I1002 12:03:17.735084       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="41.321µs"
	I1002 12:03:17.735797       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="47.861µs"
	I1002 12:03:19.985669       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd-wmx6q" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5bc68d56bd-wmx6q"
	I1002 12:03:20.942025       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="7.491184ms"
	I1002 12:03:20.942139       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="62.4µs"
	I1002 12:03:21.066596       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="6.959113ms"
	I1002 12:03:21.067086       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="38.827µs"
	
	* 
	* ==> kube-proxy [88c10822f172626cd9dc196fc05977414b2b8ccffee287b27ee6ec4c59289966] <==
	* I1002 12:02:26.471216       1 server_others.go:69] "Using iptables proxy"
	I1002 12:02:26.486405       1 node.go:141] Successfully retrieved node IP: 192.168.58.2
	I1002 12:02:26.527640       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1002 12:02:26.530104       1 server_others.go:152] "Using iptables Proxier"
	I1002 12:02:26.530210       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1002 12:02:26.530241       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1002 12:02:26.530348       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1002 12:02:26.530609       1 server.go:846] "Version info" version="v1.28.2"
	I1002 12:02:26.530893       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 12:02:26.531737       1 config.go:188] "Starting service config controller"
	I1002 12:02:26.531888       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1002 12:02:26.531961       1 config.go:97] "Starting endpoint slice config controller"
	I1002 12:02:26.531993       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1002 12:02:26.532741       1 config.go:315] "Starting node config controller"
	I1002 12:02:26.534520       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1002 12:02:26.632819       1 shared_informer.go:318] Caches are synced for service config
	I1002 12:02:26.632655       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1002 12:02:26.636232       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [b8b35c677be3b7e604e44467a5a25d42ed07faa687d60f44b0862fbc88526c8b] <==
	* I1002 12:02:09.717279       1 serving.go:348] Generated self-signed cert in-memory
	I1002 12:02:11.211062       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.2"
	I1002 12:02:11.211095       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 12:02:11.215423       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I1002 12:02:11.215529       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I1002 12:02:11.215616       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 12:02:11.215662       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1002 12:02:11.215700       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 12:02:11.215735       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1002 12:02:11.216107       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1002 12:02:11.216170       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1002 12:02:11.316404       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1002 12:02:11.316466       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	I1002 12:02:11.316568       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* Oct 02 12:02:25 multinode-361100 kubelet[1408]: I1002 12:02:25.374891    1408 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/bc5f6602-13e3-4c6a-b3ce-4ca28a07bd37-cni-cfg\") pod \"kindnet-2lbdw\" (UID: \"bc5f6602-13e3-4c6a-b3ce-4ca28a07bd37\") " pod="kube-system/kindnet-2lbdw"
	Oct 02 12:02:25 multinode-361100 kubelet[1408]: I1002 12:02:25.374944    1408 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/356394df-71a5-4114-99de-9d594ec624ca-kube-proxy\") pod \"kube-proxy-6gfcj\" (UID: \"356394df-71a5-4114-99de-9d594ec624ca\") " pod="kube-system/kube-proxy-6gfcj"
	Oct 02 12:02:25 multinode-361100 kubelet[1408]: I1002 12:02:25.374967    1408 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bc5f6602-13e3-4c6a-b3ce-4ca28a07bd37-lib-modules\") pod \"kindnet-2lbdw\" (UID: \"bc5f6602-13e3-4c6a-b3ce-4ca28a07bd37\") " pod="kube-system/kindnet-2lbdw"
	Oct 02 12:02:25 multinode-361100 kubelet[1408]: I1002 12:02:25.374997    1408 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/356394df-71a5-4114-99de-9d594ec624ca-lib-modules\") pod \"kube-proxy-6gfcj\" (UID: \"356394df-71a5-4114-99de-9d594ec624ca\") " pod="kube-system/kube-proxy-6gfcj"
	Oct 02 12:02:25 multinode-361100 kubelet[1408]: I1002 12:02:25.375025    1408 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdsgt\" (UniqueName: \"kubernetes.io/projected/356394df-71a5-4114-99de-9d594ec624ca-kube-api-access-wdsgt\") pod \"kube-proxy-6gfcj\" (UID: \"356394df-71a5-4114-99de-9d594ec624ca\") " pod="kube-system/kube-proxy-6gfcj"
	Oct 02 12:02:25 multinode-361100 kubelet[1408]: I1002 12:02:25.375051    1408 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/356394df-71a5-4114-99de-9d594ec624ca-xtables-lock\") pod \"kube-proxy-6gfcj\" (UID: \"356394df-71a5-4114-99de-9d594ec624ca\") " pod="kube-system/kube-proxy-6gfcj"
	Oct 02 12:02:25 multinode-361100 kubelet[1408]: I1002 12:02:25.375072    1408 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bc5f6602-13e3-4c6a-b3ce-4ca28a07bd37-xtables-lock\") pod \"kindnet-2lbdw\" (UID: \"bc5f6602-13e3-4c6a-b3ce-4ca28a07bd37\") " pod="kube-system/kindnet-2lbdw"
	Oct 02 12:02:25 multinode-361100 kubelet[1408]: I1002 12:02:25.375100    1408 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skgc6\" (UniqueName: \"kubernetes.io/projected/bc5f6602-13e3-4c6a-b3ce-4ca28a07bd37-kube-api-access-skgc6\") pod \"kindnet-2lbdw\" (UID: \"bc5f6602-13e3-4c6a-b3ce-4ca28a07bd37\") " pod="kube-system/kindnet-2lbdw"
	Oct 02 12:02:25 multinode-361100 kubelet[1408]: W1002 12:02:25.657510    1408 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/506dd6922a980e458f6da9ba5667ad60afdf56bc377cf5d8b7da92e45a291166/crio-278ef68d2e1445aec3d59e2ce7ef1c78ec745dc3feb2851d541d52a215c7ea51 WatchSource:0}: Error finding container 278ef68d2e1445aec3d59e2ce7ef1c78ec745dc3feb2851d541d52a215c7ea51: Status 404 returned error can't find the container with id 278ef68d2e1445aec3d59e2ce7ef1c78ec745dc3feb2851d541d52a215c7ea51
	Oct 02 12:02:26 multinode-361100 kubelet[1408]: I1002 12:02:26.975270    1408 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-6gfcj" podStartSLOduration=1.9752273759999999 podCreationTimestamp="2023-10-02 12:02:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-10-02 12:02:26.959266481 +0000 UTC m=+14.306351925" watchObservedRunningTime="2023-10-02 12:02:26.975227376 +0000 UTC m=+14.322312829"
	Oct 02 12:02:32 multinode-361100 kubelet[1408]: I1002 12:02:32.866432    1408 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-2lbdw" podStartSLOduration=7.866385413 podCreationTimestamp="2023-10-02 12:02:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-10-02 12:02:26.979908768 +0000 UTC m=+14.326994204" watchObservedRunningTime="2023-10-02 12:02:32.866385413 +0000 UTC m=+20.213470857"
	Oct 02 12:02:56 multinode-361100 kubelet[1408]: I1002 12:02:56.953435    1408 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Oct 02 12:02:56 multinode-361100 kubelet[1408]: I1002 12:02:56.980600    1408 topology_manager.go:215] "Topology Admit Handler" podUID="43f1fa61-5afd-4b63-abf4-f27325b4e897" podNamespace="kube-system" podName="coredns-5dd5756b68-t8gwn"
	Oct 02 12:02:56 multinode-361100 kubelet[1408]: I1002 12:02:56.983375    1408 topology_manager.go:215] "Topology Admit Handler" podUID="5dde50ec-2225-41ca-adeb-ceff5d1717b9" podNamespace="kube-system" podName="storage-provisioner"
	Oct 02 12:02:57 multinode-361100 kubelet[1408]: I1002 12:02:57.009227    1408 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4kptr\" (UniqueName: \"kubernetes.io/projected/43f1fa61-5afd-4b63-abf4-f27325b4e897-kube-api-access-4kptr\") pod \"coredns-5dd5756b68-t8gwn\" (UID: \"43f1fa61-5afd-4b63-abf4-f27325b4e897\") " pod="kube-system/coredns-5dd5756b68-t8gwn"
	Oct 02 12:02:57 multinode-361100 kubelet[1408]: I1002 12:02:57.009362    1408 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/5dde50ec-2225-41ca-adeb-ceff5d1717b9-tmp\") pod \"storage-provisioner\" (UID: \"5dde50ec-2225-41ca-adeb-ceff5d1717b9\") " pod="kube-system/storage-provisioner"
	Oct 02 12:02:57 multinode-361100 kubelet[1408]: I1002 12:02:57.009389    1408 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2br7l\" (UniqueName: \"kubernetes.io/projected/5dde50ec-2225-41ca-adeb-ceff5d1717b9-kube-api-access-2br7l\") pod \"storage-provisioner\" (UID: \"5dde50ec-2225-41ca-adeb-ceff5d1717b9\") " pod="kube-system/storage-provisioner"
	Oct 02 12:02:57 multinode-361100 kubelet[1408]: I1002 12:02:57.009424    1408 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/43f1fa61-5afd-4b63-abf4-f27325b4e897-config-volume\") pod \"coredns-5dd5756b68-t8gwn\" (UID: \"43f1fa61-5afd-4b63-abf4-f27325b4e897\") " pod="kube-system/coredns-5dd5756b68-t8gwn"
	Oct 02 12:02:57 multinode-361100 kubelet[1408]: W1002 12:02:57.339974    1408 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/506dd6922a980e458f6da9ba5667ad60afdf56bc377cf5d8b7da92e45a291166/crio-1b7502787807efa97d140da61f8e5c53ad7486b37828f948cb7efb466b7efd87 WatchSource:0}: Error finding container 1b7502787807efa97d140da61f8e5c53ad7486b37828f948cb7efb466b7efd87: Status 404 returned error can't find the container with id 1b7502787807efa97d140da61f8e5c53ad7486b37828f948cb7efb466b7efd87
	Oct 02 12:02:58 multinode-361100 kubelet[1408]: I1002 12:02:58.028150    1408 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=32.028103125 podCreationTimestamp="2023-10-02 12:02:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-10-02 12:02:58.013593348 +0000 UTC m=+45.360678792" watchObservedRunningTime="2023-10-02 12:02:58.028103125 +0000 UTC m=+45.375188593"
	Oct 02 12:03:17 multinode-361100 kubelet[1408]: I1002 12:03:17.694057    1408 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-t8gwn" podStartSLOduration=52.693987371 podCreationTimestamp="2023-10-02 12:02:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-10-02 12:02:58.028661189 +0000 UTC m=+45.375746625" watchObservedRunningTime="2023-10-02 12:03:17.693987371 +0000 UTC m=+65.041072807"
	Oct 02 12:03:17 multinode-361100 kubelet[1408]: I1002 12:03:17.694332    1408 topology_manager.go:215] "Topology Admit Handler" podUID="903d347b-457a-4f8d-9d19-20bce1b82daa" podNamespace="default" podName="busybox-5bc68d56bd-4tnjh"
	Oct 02 12:03:17 multinode-361100 kubelet[1408]: I1002 12:03:17.810453    1408 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rh9h\" (UniqueName: \"kubernetes.io/projected/903d347b-457a-4f8d-9d19-20bce1b82daa-kube-api-access-6rh9h\") pod \"busybox-5bc68d56bd-4tnjh\" (UID: \"903d347b-457a-4f8d-9d19-20bce1b82daa\") " pod="default/busybox-5bc68d56bd-4tnjh"
	Oct 02 12:03:18 multinode-361100 kubelet[1408]: W1002 12:03:18.059933    1408 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/506dd6922a980e458f6da9ba5667ad60afdf56bc377cf5d8b7da92e45a291166/crio-248cfcb4bdd0bc03dc948dc7d2d8f72f2231bd66602d96028ac7eab0e8e8fc9f WatchSource:0}: Error finding container 248cfcb4bdd0bc03dc948dc7d2d8f72f2231bd66602d96028ac7eab0e8e8fc9f: Status 404 returned error can't find the container with id 248cfcb4bdd0bc03dc948dc7d2d8f72f2231bd66602d96028ac7eab0e8e8fc9f
	Oct 02 12:03:22 multinode-361100 kubelet[1408]: E1002 12:03:22.730324    1408 upgradeaware.go:425] Error proxying data from client to backend: readfrom tcp 127.0.0.1:39882->127.0.0.1:44003: write tcp 127.0.0.1:39882->127.0.0.1:44003: write: broken pipe
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p multinode-361100 -n multinode-361100
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-361100 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (4.52s)

                                                
                                    
TestRunningBinaryUpgrade (69.06s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:133: (dbg) Run:  /tmp/minikube-v1.17.0.3978120302.exe start -p running-upgrade-763919 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:133: (dbg) Done: /tmp/minikube-v1.17.0.3978120302.exe start -p running-upgrade-763919 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m1.262777642s)
version_upgrade_test.go:143: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-763919 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:143: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p running-upgrade-763919 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (3.16766297s)

                                                
                                                
-- stdout --
	* [running-upgrade-763919] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17340
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17340-2494243/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17340-2494243/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.2
	* Using the docker driver based on existing profile
	* Starting control plane node running-upgrade-763919 in cluster running-upgrade-763919
	* Pulling base image ...
	* Updating the running docker "running-upgrade-763919" container ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 12:18:58.447697 2623271 out.go:296] Setting OutFile to fd 1 ...
	I1002 12:18:58.447968 2623271 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 12:18:58.447997 2623271 out.go:309] Setting ErrFile to fd 2...
	I1002 12:18:58.448019 2623271 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 12:18:58.448316 2623271 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17340-2494243/.minikube/bin
	I1002 12:18:58.448812 2623271 out.go:303] Setting JSON to false
	I1002 12:18:58.450035 2623271 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":72084,"bootTime":1696177054,"procs":356,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1045-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1002 12:18:58.450138 2623271 start.go:138] virtualization:  
	I1002 12:18:58.452756 2623271 out.go:177] * [running-upgrade-763919] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I1002 12:18:58.455362 2623271 out.go:177]   - MINIKUBE_LOCATION=17340
	I1002 12:18:58.457170 2623271 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 12:18:58.456019 2623271 preload.go:306] deleting older generation preload /home/jenkins/minikube-integration/17340-2494243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.2-cri-o-overlay-arm64.tar.lz4
	I1002 12:18:58.456073 2623271 notify.go:220] Checking for updates...
	I1002 12:18:58.459198 2623271 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17340-2494243/kubeconfig
	I1002 12:18:58.461002 2623271 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17340-2494243/.minikube
	I1002 12:18:58.462822 2623271 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 12:18:58.464616 2623271 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 12:18:58.468305 2623271 config.go:182] Loaded profile config "running-upgrade-763919": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I1002 12:18:58.470909 2623271 out.go:177] * Kubernetes 1.28.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.2
	I1002 12:18:58.472771 2623271 driver.go:373] Setting default libvirt URI to qemu:///system
	I1002 12:18:58.508956 2623271 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I1002 12:18:58.509065 2623271 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 12:18:58.634916 2623271 preload.go:306] deleting older generation preload /home/jenkins/minikube-integration/17340-2494243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.2-cri-o-overlay-arm64.tar.lz4.checksum
	I1002 12:18:58.643165 2623271 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:51 OomKillDisable:true NGoroutines:64 SystemTime:2023-10-02 12:18:58.612276518 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1045-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1002 12:18:58.643269 2623271 docker.go:294] overlay module found
	I1002 12:18:58.646453 2623271 out.go:177] * Using the docker driver based on existing profile
	I1002 12:18:58.648343 2623271 start.go:298] selected driver: docker
	I1002 12:18:58.648363 2623271 start.go:902] validating driver "docker" against &{Name:running-upgrade-763919 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:running-upgrade-763919 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.59.16 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s}
	I1002 12:18:58.648457 2623271 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 12:18:58.649195 2623271 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 12:18:58.719210 2623271 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:47 OomKillDisable:true NGoroutines:64 SystemTime:2023-10-02 12:18:58.709671004 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1045-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1002 12:18:58.719524 2623271 cni.go:84] Creating CNI manager for ""
	I1002 12:18:58.719542 2623271 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 12:18:58.719552 2623271 start_flags.go:321] config:
	{Name:running-upgrade-763919 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:running-upgrade-763919 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.59.16 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s}
	I1002 12:18:58.721773 2623271 out.go:177] * Starting control plane node running-upgrade-763919 in cluster running-upgrade-763919
	I1002 12:18:58.723693 2623271 cache.go:122] Beginning downloading kic base image for docker with crio
	I1002 12:18:58.725561 2623271 out.go:177] * Pulling base image ...
	I1002 12:18:58.727281 2623271 preload.go:132] Checking if preload exists for k8s version v1.20.2 and runtime crio
	I1002 12:18:58.727336 2623271 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local docker daemon
	I1002 12:18:58.748645 2623271 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local docker daemon, skipping pull
	I1002 12:18:58.748681 2623271 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e exists in daemon, skipping load
	W1002 12:18:58.794216 2623271 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.2/preloaded-images-k8s-v18-v1.20.2-cri-o-overlay-arm64.tar.lz4 status code: 404
	I1002 12:18:58.794378 2623271 profile.go:148] Saving config to /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/running-upgrade-763919/config.json ...
	I1002 12:18:58.794476 2623271 cache.go:107] acquiring lock: {Name:mkc887fe5cdb6eeafbff75697289cf8eb6c02b53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 12:18:58.794656 2623271 cache.go:195] Successfully downloaded all kic artifacts
	I1002 12:18:58.794698 2623271 cache.go:115] /home/jenkins/minikube-integration/17340-2494243/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1002 12:18:58.794707 2623271 start.go:365] acquiring machines lock for running-upgrade-763919: {Name:mk4cd03a996e093da5df8f1a14ed59f3c8737d60 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 12:18:58.794714 2623271 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17340-2494243/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 244.021µs
	I1002 12:18:58.794725 2623271 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17340-2494243/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1002 12:18:58.794747 2623271 start.go:369] acquired machines lock for "running-upgrade-763919" in 26.018µs
	I1002 12:18:58.794763 2623271 start.go:96] Skipping create...Using existing machine configuration
	I1002 12:18:58.794769 2623271 fix.go:54] fixHost starting: 
	I1002 12:18:58.794736 2623271 cache.go:107] acquiring lock: {Name:mk1a662d92affbff9c1cc28ea8291fabf268033e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 12:18:58.794820 2623271 cache.go:115] /home/jenkins/minikube-integration/17340-2494243/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 exists
	I1002 12:18:58.794861 2623271 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.20.2" -> "/home/jenkins/minikube-integration/17340-2494243/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2" took 126.646µs
	I1002 12:18:58.794870 2623271 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.20.2 -> /home/jenkins/minikube-integration/17340-2494243/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 succeeded
	I1002 12:18:58.794880 2623271 cache.go:107] acquiring lock: {Name:mk35152b0c9ad9d157e9936d1fd586fd9fbfb1d3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 12:18:58.794918 2623271 cache.go:115] /home/jenkins/minikube-integration/17340-2494243/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 exists
	I1002 12:18:58.794923 2623271 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.20.2" -> "/home/jenkins/minikube-integration/17340-2494243/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2" took 45.547µs
	I1002 12:18:58.794930 2623271 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.20.2 -> /home/jenkins/minikube-integration/17340-2494243/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 succeeded
	I1002 12:18:58.794940 2623271 cache.go:107] acquiring lock: {Name:mk15c598155ba70dcfefbe44417167251c9a7443 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 12:18:58.794976 2623271 cache.go:115] /home/jenkins/minikube-integration/17340-2494243/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 exists
	I1002 12:18:58.794983 2623271 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.20.2" -> "/home/jenkins/minikube-integration/17340-2494243/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2" took 44.562µs
	I1002 12:18:58.794990 2623271 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.20.2 -> /home/jenkins/minikube-integration/17340-2494243/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 succeeded
	I1002 12:18:58.794997 2623271 cache.go:107] acquiring lock: {Name:mk25c710a649cce1f9a491d98775e2197b453894 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 12:18:58.795024 2623271 cache.go:115] /home/jenkins/minikube-integration/17340-2494243/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 exists
	I1002 12:18:58.795029 2623271 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.20.2" -> "/home/jenkins/minikube-integration/17340-2494243/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2" took 33.354µs
	I1002 12:18:58.795035 2623271 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.20.2 -> /home/jenkins/minikube-integration/17340-2494243/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 succeeded
	I1002 12:18:58.795044 2623271 cache.go:107] acquiring lock: {Name:mkc9b31b3580328998105e1de3c4bcd3ff89f10b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 12:18:58.795068 2623271 cache.go:115] /home/jenkins/minikube-integration/17340-2494243/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 exists
	I1002 12:18:58.795075 2623271 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/17340-2494243/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2" took 29.555µs
	I1002 12:18:58.795081 2623271 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/17340-2494243/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 succeeded
	I1002 12:18:58.795090 2623271 cache.go:107] acquiring lock: {Name:mk30b3c60f7d3dd47fd80cb4e6230ec4b1ded053 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 12:18:58.795120 2623271 cache.go:115] /home/jenkins/minikube-integration/17340-2494243/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 exists
	I1002 12:18:58.795124 2623271 cache.go:96] cache image "registry.k8s.io/etcd:3.4.13-0" -> "/home/jenkins/minikube-integration/17340-2494243/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0" took 35.101µs
	I1002 12:18:58.795132 2623271 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.13-0 -> /home/jenkins/minikube-integration/17340-2494243/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 succeeded
	I1002 12:18:58.795142 2623271 cache.go:107] acquiring lock: {Name:mk45dd388dcc21a337ce62e1ca16382f65c6cf0d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 12:18:58.795165 2623271 cache.go:115] /home/jenkins/minikube-integration/17340-2494243/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 exists
	I1002 12:18:58.795170 2623271 cache.go:96] cache image "registry.k8s.io/coredns:1.7.0" -> "/home/jenkins/minikube-integration/17340-2494243/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0" took 29.563µs
	I1002 12:18:58.795176 2623271 cache.go:80] save to tar file registry.k8s.io/coredns:1.7.0 -> /home/jenkins/minikube-integration/17340-2494243/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 succeeded
	I1002 12:18:58.795184 2623271 cache.go:87] Successfully saved all images to host disk.
	I1002 12:18:58.795265 2623271 cli_runner.go:164] Run: docker container inspect running-upgrade-763919 --format={{.State.Status}}
	I1002 12:18:58.813716 2623271 fix.go:102] recreateIfNeeded on running-upgrade-763919: state=Running err=<nil>
	W1002 12:18:58.813754 2623271 fix.go:128] unexpected machine state, will restart: <nil>
	I1002 12:18:58.816055 2623271 out.go:177] * Updating the running docker "running-upgrade-763919" container ...
	I1002 12:18:58.817931 2623271 machine.go:88] provisioning docker machine ...
	I1002 12:18:58.817956 2623271 ubuntu.go:169] provisioning hostname "running-upgrade-763919"
	I1002 12:18:58.818039 2623271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-763919
	I1002 12:18:58.836799 2623271 main.go:141] libmachine: Using SSH client type: native
	I1002 12:18:58.837236 2623271 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ac3c0] 0x3aeb30 <nil>  [] 0s} 127.0.0.1 36060 <nil> <nil>}
	I1002 12:18:58.837255 2623271 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-763919 && echo "running-upgrade-763919" | sudo tee /etc/hostname
	I1002 12:18:58.994638 2623271 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-763919
	
	I1002 12:18:58.994717 2623271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-763919
	I1002 12:18:59.015688 2623271 main.go:141] libmachine: Using SSH client type: native
	I1002 12:18:59.016115 2623271 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ac3c0] 0x3aeb30 <nil>  [] 0s} 127.0.0.1 36060 <nil> <nil>}
	I1002 12:18:59.016137 2623271 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-763919' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-763919/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-763919' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 12:18:59.166139 2623271 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 12:18:59.166209 2623271 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17340-2494243/.minikube CaCertPath:/home/jenkins/minikube-integration/17340-2494243/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17340-2494243/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17340-2494243/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17340-2494243/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17340-2494243/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17340-2494243/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17340-2494243/.minikube}
	I1002 12:18:59.166244 2623271 ubuntu.go:177] setting up certificates
	I1002 12:18:59.166282 2623271 provision.go:83] configureAuth start
	I1002 12:18:59.166382 2623271 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-763919
	I1002 12:18:59.194709 2623271 provision.go:138] copyHostCerts
	I1002 12:18:59.194772 2623271 exec_runner.go:144] found /home/jenkins/minikube-integration/17340-2494243/.minikube/ca.pem, removing ...
	I1002 12:18:59.194796 2623271 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17340-2494243/.minikube/ca.pem
	I1002 12:18:59.194867 2623271 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-2494243/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17340-2494243/.minikube/ca.pem (1082 bytes)
	I1002 12:18:59.194962 2623271 exec_runner.go:144] found /home/jenkins/minikube-integration/17340-2494243/.minikube/cert.pem, removing ...
	I1002 12:18:59.194968 2623271 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17340-2494243/.minikube/cert.pem
	I1002 12:18:59.194995 2623271 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-2494243/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17340-2494243/.minikube/cert.pem (1123 bytes)
	I1002 12:18:59.195144 2623271 exec_runner.go:144] found /home/jenkins/minikube-integration/17340-2494243/.minikube/key.pem, removing ...
	I1002 12:18:59.195152 2623271 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17340-2494243/.minikube/key.pem
	I1002 12:18:59.195186 2623271 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-2494243/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17340-2494243/.minikube/key.pem (1675 bytes)
	I1002 12:18:59.195250 2623271 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17340-2494243/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17340-2494243/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17340-2494243/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-763919 san=[192.168.59.16 127.0.0.1 localhost 127.0.0.1 minikube running-upgrade-763919]
	I1002 12:18:59.398450 2623271 provision.go:172] copyRemoteCerts
	I1002 12:18:59.398576 2623271 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 12:18:59.398657 2623271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-763919
	I1002 12:18:59.417992 2623271 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36060 SSHKeyPath:/home/jenkins/minikube-integration/17340-2494243/.minikube/machines/running-upgrade-763919/id_rsa Username:docker}
	I1002 12:18:59.518860 2623271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2494243/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1002 12:18:59.549335 2623271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2494243/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 12:18:59.577181 2623271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2494243/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 12:18:59.603847 2623271 provision.go:86] duration metric: configureAuth took 437.508251ms
	I1002 12:18:59.603870 2623271 ubuntu.go:193] setting minikube options for container-runtime
	I1002 12:18:59.604071 2623271 config.go:182] Loaded profile config "running-upgrade-763919": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I1002 12:18:59.604174 2623271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-763919
	I1002 12:18:59.626649 2623271 main.go:141] libmachine: Using SSH client type: native
	I1002 12:18:59.627081 2623271 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ac3c0] 0x3aeb30 <nil>  [] 0s} 127.0.0.1 36060 <nil> <nil>}
	I1002 12:18:59.627102 2623271 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 12:19:00.496337 2623271 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 12:19:00.496365 2623271 machine.go:91] provisioned docker machine in 1.678418298s
	I1002 12:19:00.496375 2623271 start.go:300] post-start starting for "running-upgrade-763919" (driver="docker")
	I1002 12:19:00.496399 2623271 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 12:19:00.496464 2623271 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 12:19:00.496512 2623271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-763919
	I1002 12:19:00.518253 2623271 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36060 SSHKeyPath:/home/jenkins/minikube-integration/17340-2494243/.minikube/machines/running-upgrade-763919/id_rsa Username:docker}
	I1002 12:19:00.622762 2623271 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 12:19:00.627169 2623271 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1002 12:19:00.627193 2623271 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 12:19:00.627204 2623271 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1002 12:19:00.627211 2623271 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	I1002 12:19:00.627222 2623271 filesync.go:126] Scanning /home/jenkins/minikube-integration/17340-2494243/.minikube/addons for local assets ...
	I1002 12:19:00.627277 2623271 filesync.go:126] Scanning /home/jenkins/minikube-integration/17340-2494243/.minikube/files for local assets ...
	I1002 12:19:00.627370 2623271 filesync.go:149] local asset: /home/jenkins/minikube-integration/17340-2494243/.minikube/files/etc/ssl/certs/24995982.pem -> 24995982.pem in /etc/ssl/certs
	I1002 12:19:00.627469 2623271 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 12:19:00.637439 2623271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2494243/.minikube/files/etc/ssl/certs/24995982.pem --> /etc/ssl/certs/24995982.pem (1708 bytes)
	I1002 12:19:00.666097 2623271 start.go:303] post-start completed in 169.704954ms
	I1002 12:19:00.666227 2623271 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 12:19:00.666297 2623271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-763919
	I1002 12:19:00.687184 2623271 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36060 SSHKeyPath:/home/jenkins/minikube-integration/17340-2494243/.minikube/machines/running-upgrade-763919/id_rsa Username:docker}
	I1002 12:19:00.786793 2623271 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 12:19:00.796507 2623271 fix.go:56] fixHost completed within 2.001707116s
	I1002 12:19:00.796616 2623271 start.go:83] releasing machines lock for "running-upgrade-763919", held for 2.001852339s
	I1002 12:19:00.796722 2623271 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-763919
	I1002 12:19:00.817003 2623271 ssh_runner.go:195] Run: cat /version.json
	I1002 12:19:00.817059 2623271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-763919
	I1002 12:19:00.817110 2623271 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 12:19:00.817178 2623271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-763919
	I1002 12:19:00.850101 2623271 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36060 SSHKeyPath:/home/jenkins/minikube-integration/17340-2494243/.minikube/machines/running-upgrade-763919/id_rsa Username:docker}
	I1002 12:19:00.855944 2623271 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36060 SSHKeyPath:/home/jenkins/minikube-integration/17340-2494243/.minikube/machines/running-upgrade-763919/id_rsa Username:docker}
	W1002 12:19:00.953162 2623271 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1002 12:19:00.953254 2623271 ssh_runner.go:195] Run: systemctl --version
	I1002 12:19:01.045374 2623271 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 12:19:01.387886 2623271 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1002 12:19:01.394483 2623271 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 12:19:01.418874 2623271 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1002 12:19:01.418958 2623271 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 12:19:01.457227 2623271 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1002 12:19:01.457299 2623271 start.go:469] detecting cgroup driver to use...
	I1002 12:19:01.457346 2623271 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1002 12:19:01.457426 2623271 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	W1002 12:19:01.494486 2623271 cruntime.go:288] disable failed: sudo systemctl stop -f containerd: Process exited with status 1
	stdout:
	
	stderr:
	Job for containerd.service canceled.
	I1002 12:19:01.494599 2623271 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	W1002 12:19:01.510462 2623271 crio.go:202] disableOthers: containerd is still active
	I1002 12:19:01.510652 2623271 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 12:19:01.533926 2623271 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1002 12:19:01.534053 2623271 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 12:19:01.552230 2623271 out.go:177] 
	W1002 12:19:01.553932 2623271 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W1002 12:19:01.554017 2623271 out.go:239] * 
	W1002 12:19:01.555255 2623271 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 12:19:01.557128 2623271 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:145: upgrade from v1.17.0 to HEAD failed: out/minikube-linux-arm64 start -p running-upgrade-763919 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90
panic.go:523: *** TestRunningBinaryUpgrade FAILED at 2023-10-02 12:19:01.594524388 +0000 UTC m=+2401.383934696
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestRunningBinaryUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect running-upgrade-763919
helpers_test.go:235: (dbg) docker inspect running-upgrade-763919:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e729fe497dab9495208f69ff6672d88f6e49272b8ae0ee5e8884f201ca12d8e8",
	        "Created": "2023-10-02T12:18:09.523629423Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2620687,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-10-02T12:18:09.965702567Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:9b79b8263a5873a7b57b8bb7698df1f71e90108b3174dea92dc6c576c0a9dbf9",
	        "ResolvConfPath": "/var/lib/docker/containers/e729fe497dab9495208f69ff6672d88f6e49272b8ae0ee5e8884f201ca12d8e8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e729fe497dab9495208f69ff6672d88f6e49272b8ae0ee5e8884f201ca12d8e8/hostname",
	        "HostsPath": "/var/lib/docker/containers/e729fe497dab9495208f69ff6672d88f6e49272b8ae0ee5e8884f201ca12d8e8/hosts",
	        "LogPath": "/var/lib/docker/containers/e729fe497dab9495208f69ff6672d88f6e49272b8ae0ee5e8884f201ca12d8e8/e729fe497dab9495208f69ff6672d88f6e49272b8ae0ee5e8884f201ca12d8e8-json.log",
	        "Name": "/running-upgrade-763919",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "running-upgrade-763919:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "running-upgrade-763919",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/80e4ada94a74f95eba9e484e0006bc659d2ef51a82ee426ac5705dccf5649eef-init/diff:/var/lib/docker/overlay2/171d4493f46f8c7408ae4471b7c9af7221b748f0df832fc34f5b1c152e320f4c/diff:/var/lib/docker/overlay2/6daf6f8d61663174f03998e478544e537d890a15e00b6eaeb77a19582a2ba623/diff:/var/lib/docker/overlay2/ac2037b9d2bb44d5a148a3ba313d13f7e83d6fc8d14222d5c0350fd5888339c9/diff:/var/lib/docker/overlay2/78f703c84f03a4ee241d0fbe31c8d35dd6f5e16bcb733b72f14df48602e9c4ec/diff:/var/lib/docker/overlay2/63cafaa7657767d167f3d6392702a9d3ba128a1eb9305141149cb0f45c6f5786/diff:/var/lib/docker/overlay2/33f2ad15a415acd90acf6e106a8adbe7f8a832643f1dfd7ae51f4fc6aabaf723/diff:/var/lib/docker/overlay2/557f25359e3f3ac601bed43c01a277915d2b53d67b0775a6a9b71b5daf33b10a/diff:/var/lib/docker/overlay2/8ecb1842888e5154bcfb91d8f7082e7ec6bdd6bbe62fcd018df355e277f64c3c/diff:/var/lib/docker/overlay2/a3dba62c267f925e418a74ebbc103cd6c999d2cd053b5c5e88899af24402108a/diff:/var/lib/docker/overlay2/a3d9b08500f5bbdc1b7981cd9e4655c3566a470b7a761627af942106bced21fa/diff:/var/lib/docker/overlay2/27ff4e90d34cf392b721d93618e84f6dfb71cb23dcc36c6133c4319d312ef100/diff:/var/lib/docker/overlay2/1e67831c0610ec4a981f9085b1553d392761f5d5ccf0d72bfd6845d65879cc74/diff:/var/lib/docker/overlay2/30521aec27134ab886d2a2f44acaaf22455b1d7bdd33bf9175cc0416745ce746/diff:/var/lib/docker/overlay2/c45daba8fbaa111e1b32ebfb2f0666d724783ecdd905a6fecfa95df1b82dac41/diff:/var/lib/docker/overlay2/6b8f85984a1feb92ff99473fd35b4333da5a39029937e1d5ee7bd6ea85c4dc5a/diff:/var/lib/docker/overlay2/dd97dacf9714edb4723aa1b6a58a89a0472144010cd26394d86af41b38c65b09/diff:/var/lib/docker/overlay2/781f7bde8325b5bdbfb75f19f03e80dc0a3af0f31550132b6f8350fa1811332f/diff:/var/lib/docker/overlay2/fd951880fabb658b6b01c6b988e0d03b17fc5cace0be9704aeec312127ab7bef/diff:/var/lib/docker/overlay2/e77f0266d356d38817957630b44b226371c9d5073f496adf2ccdbc8f05db971f/diff:/var/lib/docker/overlay2/0c6a75ed8029c31d37cec2c1f6374089c3c49ccd1606dd5b51e5148825d7e133/diff:/var/lib/docker/overlay2/becad536bdb29fa8befc9748296e6c43c65d966c1ab2f95d273ddb43788025b7/diff:/var/lib/docker/overlay2/03ce0af76a2ce19e261ed06a949e3d53cc9e495b0be6460409a0c6b3f9b7c20c/diff:/var/lib/docker/overlay2/9770bd55084342c7e18a0aa3713c2a4c1487f62e35754af68db4e4f82a0f418a/diff:/var/lib/docker/overlay2/89f0b4b1daf89a2c7b3d80d0a7c3cce45944a744350551f405ae37e34da94499/diff:/var/lib/docker/overlay2/96fbd06b09569db7dc5e485e0dd8b09333e5685fc3f37dad3734c49eeb7dc967/diff:/var/lib/docker/overlay2/1f7f2b99ddc7be8e0375129cbaaf5fa0e0601ab256d0385d17ec96514b836bee/diff:/var/lib/docker/overlay2/2a23691a008d98266197b5fe3a466e4645f34776084ae2e594c2ed6f68c7637f/diff:/var/lib/docker/overlay2/11567ecd5b13d18d71f1ae04d487eb1ff2165eb6434f82ba24f315bcea5b6743/diff:/var/lib/docker/overlay2/9af30912fa98987aa12ea5eb6baac05171831798364515841fc2a1f41c0573f0/diff:/var/lib/docker/overlay2/b7b35d67fd23bc76d88e963f00d5ded4f4c9156af1f6b65ad4bbdef476dc7ce9/diff:/var/lib/docker/overlay2/01d57a58c57c4724e39344078ae205a5f502e909814fd7a63ec559d77cfa9576/diff:/var/lib/docker/overlay2/fa334b3c40a80f5ec45ca3d14e08e547b79f6cb6b4752fd743c03b4bde5d5f8a/diff:/var/lib/docker/overlay2/2ab23e53a5074ca484e0298d211dddda90abb3367b9daee304a13fad75c81ddc/diff:/var/lib/docker/overlay2/2c0f7dd20716c2b3aa74682ef2dd9e533b0e501108db4238825b8c77bd1485fd/diff",
	                "MergedDir": "/var/lib/docker/overlay2/80e4ada94a74f95eba9e484e0006bc659d2ef51a82ee426ac5705dccf5649eef/merged",
	                "UpperDir": "/var/lib/docker/overlay2/80e4ada94a74f95eba9e484e0006bc659d2ef51a82ee426ac5705dccf5649eef/diff",
	                "WorkDir": "/var/lib/docker/overlay2/80e4ada94a74f95eba9e484e0006bc659d2ef51a82ee426ac5705dccf5649eef/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "running-upgrade-763919",
	                "Source": "/var/lib/docker/volumes/running-upgrade-763919/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "running-upgrade-763919",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "running-upgrade-763919",
	                "name.minikube.sigs.k8s.io": "running-upgrade-763919",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b6030759e91cf51e0351e8e17e2ac98c1fd1a2f1191ada2f1087520a1c3a0f8b",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36060"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36059"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36058"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36057"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/b6030759e91c",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "running-upgrade-763919": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.59.16"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "e729fe497dab",
	                        "running-upgrade-763919"
	                    ],
	                    "NetworkID": "03c81a06447430ac1a27decb1ab63350513d705d7d528d1f9f2adcdc95968caa",
	                    "EndpointID": "4cf04ca4681a990d75d86f0a3801084edd79d12186b0dfba84a9ef11bef662d2",
	                    "Gateway": "192.168.59.1",
	                    "IPAddress": "192.168.59.16",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3b:10",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p running-upgrade-763919 -n running-upgrade-763919
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p running-upgrade-763919 -n running-upgrade-763919: exit status 4 (583.034056ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`
-- /stdout --
** stderr ** 
	E1002 12:19:02.116024 2623790 status.go:415] kubeconfig endpoint: extract IP: "running-upgrade-763919" does not appear in /home/jenkins/minikube-integration/17340-2494243/kubeconfig
** /stderr **
helpers_test.go:239: status error: exit status 4 (may be ok)
helpers_test.go:241: "running-upgrade-763919" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "running-upgrade-763919" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-763919
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-763919: (2.979738122s)
--- FAIL: TestRunningBinaryUpgrade (69.06s)
TestMissingContainerUpgrade (149.78s)
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:322: (dbg) Run:  /tmp/minikube-v1.17.0.2154092689.exe start -p missing-upgrade-402693 --memory=2200 --driver=docker  --container-runtime=crio
E1002 12:14:24.741434 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/functional-262988/client.crt: no such file or directory
version_upgrade_test.go:322: (dbg) Done: /tmp/minikube-v1.17.0.2154092689.exe start -p missing-upgrade-402693 --memory=2200 --driver=docker  --container-runtime=crio: (1m39.525898084s)
version_upgrade_test.go:331: (dbg) Run:  docker stop missing-upgrade-402693
version_upgrade_test.go:331: (dbg) Done: docker stop missing-upgrade-402693: (4.422342628s)
version_upgrade_test.go:336: (dbg) Run:  docker rm missing-upgrade-402693
version_upgrade_test.go:342: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-402693 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:342: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p missing-upgrade-402693 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (41.827233418s)
-- stdout --
	* [missing-upgrade-402693] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17340
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17340-2494243/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17340-2494243/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.2
	* Using the docker driver based on existing profile
	* Starting control plane node missing-upgrade-402693 in cluster missing-upgrade-402693
	* Pulling base image ...
	* docker "missing-upgrade-402693" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	
-- /stdout --
** stderr ** 
	I1002 12:15:59.850210 2609183 out.go:296] Setting OutFile to fd 1 ...
	I1002 12:15:59.850391 2609183 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 12:15:59.850399 2609183 out.go:309] Setting ErrFile to fd 2...
	I1002 12:15:59.850405 2609183 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 12:15:59.850668 2609183 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17340-2494243/.minikube/bin
	I1002 12:15:59.851140 2609183 out.go:303] Setting JSON to false
	I1002 12:15:59.852267 2609183 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":71906,"bootTime":1696177054,"procs":356,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1045-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1002 12:15:59.852355 2609183 start.go:138] virtualization:  
	I1002 12:15:59.865404 2609183 out.go:177] * [missing-upgrade-402693] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I1002 12:15:59.867466 2609183 out.go:177]   - MINIKUBE_LOCATION=17340
	I1002 12:15:59.869667 2609183 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 12:15:59.867622 2609183 notify.go:220] Checking for updates...
	I1002 12:15:59.872057 2609183 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17340-2494243/kubeconfig
	I1002 12:15:59.874137 2609183 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17340-2494243/.minikube
	I1002 12:15:59.876974 2609183 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 12:15:59.881437 2609183 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 12:15:59.886308 2609183 config.go:182] Loaded profile config "missing-upgrade-402693": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I1002 12:15:59.888763 2609183 out.go:177] * Kubernetes 1.28.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.2
	I1002 12:15:59.891051 2609183 driver.go:373] Setting default libvirt URI to qemu:///system
	I1002 12:15:59.970270 2609183 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I1002 12:15:59.970370 2609183 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 12:16:00.209242 2609183 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:55 SystemTime:2023-10-02 12:16:00.163913312 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1045-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1002 12:16:00.209379 2609183 docker.go:294] overlay module found
	I1002 12:16:00.213543 2609183 out.go:177] * Using the docker driver based on existing profile
	I1002 12:16:00.215547 2609183 start.go:298] selected driver: docker
	I1002 12:16:00.215602 2609183 start.go:902] validating driver "docker" against &{Name:missing-upgrade-402693 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:missing-upgrade-402693 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.59.6 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s}
	I1002 12:16:00.215777 2609183 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 12:16:00.216731 2609183 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 12:16:00.434687 2609183 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:55 SystemTime:2023-10-02 12:16:00.417726334 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1045-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1002 12:16:00.435115 2609183 cni.go:84] Creating CNI manager for ""
	I1002 12:16:00.435139 2609183 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 12:16:00.435156 2609183 start_flags.go:321] config:
	{Name:missing-upgrade-402693 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:missing-upgrade-402693 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.59.6 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s}
	I1002 12:16:00.440704 2609183 out.go:177] * Starting control plane node missing-upgrade-402693 in cluster missing-upgrade-402693
	I1002 12:16:00.442565 2609183 cache.go:122] Beginning downloading kic base image for docker with crio
	I1002 12:16:00.444586 2609183 out.go:177] * Pulling base image ...
	I1002 12:16:00.446241 2609183 preload.go:132] Checking if preload exists for k8s version v1.20.2 and runtime crio
	I1002 12:16:00.446435 2609183 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local docker daemon
	I1002 12:16:00.492568 2609183 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e to local cache
	I1002 12:16:00.492728 2609183 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local cache directory
	I1002 12:16:00.493282 2609183 image.go:118] Writing gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e to local cache
	W1002 12:16:00.519221 2609183 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.2/preloaded-images-k8s-v18-v1.20.2-cri-o-overlay-arm64.tar.lz4 status code: 404
	I1002 12:16:00.519361 2609183 profile.go:148] Saving config to /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/missing-upgrade-402693/config.json ...
	I1002 12:16:00.519708 2609183 cache.go:107] acquiring lock: {Name:mkc887fe5cdb6eeafbff75697289cf8eb6c02b53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 12:16:00.519783 2609183 cache.go:115] /home/jenkins/minikube-integration/17340-2494243/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1002 12:16:00.519792 2609183 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17340-2494243/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 88.739µs
	I1002 12:16:00.519801 2609183 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17340-2494243/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1002 12:16:00.519809 2609183 cache.go:107] acquiring lock: {Name:mk1a662d92affbff9c1cc28ea8291fabf268033e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 12:16:00.519895 2609183 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.2
	I1002 12:16:00.522076 2609183 cache.go:107] acquiring lock: {Name:mk30b3c60f7d3dd47fd80cb4e6230ec4b1ded053 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 12:16:00.522417 2609183 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1002 12:16:00.522714 2609183 cache.go:107] acquiring lock: {Name:mk15c598155ba70dcfefbe44417167251c9a7443 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 12:16:00.522845 2609183 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.2
	I1002 12:16:00.523073 2609183 cache.go:107] acquiring lock: {Name:mk25c710a649cce1f9a491d98775e2197b453894 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 12:16:00.523172 2609183 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.2
	I1002 12:16:00.523407 2609183 cache.go:107] acquiring lock: {Name:mkc9b31b3580328998105e1de3c4bcd3ff89f10b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 12:16:00.523516 2609183 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I1002 12:16:00.520042 2609183 cache.go:107] acquiring lock: {Name:mk35152b0c9ad9d157e9936d1fd586fd9fbfb1d3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 12:16:00.523842 2609183 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.2
	I1002 12:16:00.524284 2609183 cache.go:107] acquiring lock: {Name:mk45dd388dcc21a337ce62e1ca16382f65c6cf0d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 12:16:00.528086 2609183 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I1002 12:16:00.529424 2609183 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.2
	I1002 12:16:00.530131 2609183 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.2
	I1002 12:16:00.530404 2609183 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1002 12:16:00.530619 2609183 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.2
	I1002 12:16:00.530751 2609183 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.2
	I1002 12:16:00.531324 2609183 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1002 12:16:00.531762 2609183 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1002 12:16:01.011609 2609183 cache.go:162] opening:  /home/jenkins/minikube-integration/17340-2494243/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2
	W1002 12:16:01.015040 2609183 image.go:265] image registry.k8s.io/kube-proxy:v1.20.2 arch mismatch: want arm64 got amd64. fixing
	I1002 12:16:01.015136 2609183 cache.go:162] opening:  /home/jenkins/minikube-integration/17340-2494243/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2
	W1002 12:16:01.046053 2609183 image.go:265] image registry.k8s.io/coredns:1.7.0 arch mismatch: want arm64 got amd64. fixing
	I1002 12:16:01.046115 2609183 cache.go:162] opening:  /home/jenkins/minikube-integration/17340-2494243/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0
	W1002 12:16:01.057838 2609183 image.go:265] image registry.k8s.io/etcd:3.4.13-0 arch mismatch: want arm64 got amd64. fixing
	I1002 12:16:01.057907 2609183 cache.go:162] opening:  /home/jenkins/minikube-integration/17340-2494243/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0
	I1002 12:16:01.064360 2609183 cache.go:162] opening:  /home/jenkins/minikube-integration/17340-2494243/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2
	I1002 12:16:01.067094 2609183 cache.go:162] opening:  /home/jenkins/minikube-integration/17340-2494243/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2
	I1002 12:16:01.112295 2609183 cache.go:157] /home/jenkins/minikube-integration/17340-2494243/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 exists
	I1002 12:16:01.112325 2609183 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/17340-2494243/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2" took 588.921605ms
	I1002 12:16:01.112338 2609183 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/17340-2494243/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 succeeded
	I1002 12:16:01.148303 2609183 cache.go:162] opening:  /home/jenkins/minikube-integration/17340-2494243/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2
	    > gcr.io/k8s-minikube/kicbase...:  641.34 KiB / 287.99 MiB [] 0.22% ? p/s ?
	I1002 12:16:01.671982 2609183 cache.go:157] /home/jenkins/minikube-integration/17340-2494243/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 exists
	I1002 12:16:01.672076 2609183 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.20.2" -> "/home/jenkins/minikube-integration/17340-2494243/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2" took 1.149372436s
	I1002 12:16:01.672105 2609183 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.20.2 -> /home/jenkins/minikube-integration/17340-2494243/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 succeeded
	    > gcr.io/k8s-minikube/kicbase...:  16.17 MiB / 287.99 MiB [>] 5.61% ? p/s ?
	I1002 12:16:01.707385 2609183 cache.go:157] /home/jenkins/minikube-integration/17340-2494243/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 exists
	I1002 12:16:01.707715 2609183 cache.go:96] cache image "registry.k8s.io/coredns:1.7.0" -> "/home/jenkins/minikube-integration/17340-2494243/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0" took 1.183430457s
	I1002 12:16:01.707764 2609183 cache.go:80] save to tar file registry.k8s.io/coredns:1.7.0 -> /home/jenkins/minikube-integration/17340-2494243/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 succeeded
	    > gcr.io/k8s-minikube/kicbase...:  25.93 MiB / 287.99 MiB  9.00% 44.19 MiB
	I1002 12:16:01.964787 2609183 cache.go:157] /home/jenkins/minikube-integration/17340-2494243/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 exists
	I1002 12:16:01.964823 2609183 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.20.2" -> "/home/jenkins/minikube-integration/17340-2494243/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2" took 1.444781856s
	I1002 12:16:01.964838 2609183 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.20.2 -> /home/jenkins/minikube-integration/17340-2494243/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 succeeded
	I1002 12:16:02.019689 2609183 cache.go:157] /home/jenkins/minikube-integration/17340-2494243/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 exists
	I1002 12:16:02.019767 2609183 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.20.2" -> "/home/jenkins/minikube-integration/17340-2494243/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2" took 1.499956831s
	I1002 12:16:02.019799 2609183 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.20.2 -> /home/jenkins/minikube-integration/17340-2494243/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 succeeded
	    > gcr.io/k8s-minikube/kicbase...:  27.11 MiB / 287.99 MiB  9.41% 41.46 MiB
	I1002 12:16:02.501084 2609183 cache.go:157] /home/jenkins/minikube-integration/17340-2494243/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 exists
	I1002 12:16:02.501167 2609183 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.20.2" -> "/home/jenkins/minikube-integration/17340-2494243/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2" took 1.978096531s
	I1002 12:16:02.501196 2609183 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.20.2 -> /home/jenkins/minikube-integration/17340-2494243/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 succeeded
	    > gcr.io/k8s-minikube/kicbase...:  171.77 MiB / 287.99 MiB  59.64% 48.14 MiB
	I1002 12:16:04.671101 2609183 cache.go:157] /home/jenkins/minikube-integration/17340-2494243/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 exists
	I1002 12:16:04.671128 2609183 cache.go:96] cache image "registry.k8s.io/etcd:3.4.13-0" -> "/home/jenkins/minikube-integration/17340-2494243/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0" took 4.14906536s
	I1002 12:16:04.671141 2609183 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.13-0 -> /home/jenkins/minikube-integration/17340-2494243/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 succeeded
	I1002 12:16:04.671155 2609183 cache.go:87] Successfully saved all images to host disk.
	    > gcr.io/k8s-minikube/kicbase...:  287.99 MiB / 287.99 MiB  100.00% 45.94 MiB
	I1002 12:16:07.556002 2609183 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e as a tarball
	I1002 12:16:07.556057 2609183 cache.go:163] Loading gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e from local cache
	I1002 12:16:07.751297 2609183 cache.go:165] successfully loaded and using gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e from cached tarball
	I1002 12:16:07.751341 2609183 cache.go:195] Successfully downloaded all kic artifacts
	I1002 12:16:07.751409 2609183 start.go:365] acquiring machines lock for missing-upgrade-402693: {Name:mk08d0e3a7eb003a1b2ee5c21ccd673b740342e6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 12:16:07.751485 2609183 start.go:369] acquired machines lock for "missing-upgrade-402693" in 46.023µs
	I1002 12:16:07.751507 2609183 start.go:96] Skipping create...Using existing machine configuration
	I1002 12:16:07.751519 2609183 fix.go:54] fixHost starting: 
	I1002 12:16:07.751790 2609183 cli_runner.go:164] Run: docker container inspect missing-upgrade-402693 --format={{.State.Status}}
	W1002 12:16:07.769857 2609183 cli_runner.go:211] docker container inspect missing-upgrade-402693 --format={{.State.Status}} returned with exit code 1
	I1002 12:16:07.769922 2609183 fix.go:102] recreateIfNeeded on missing-upgrade-402693: state= err=unknown state "missing-upgrade-402693": docker container inspect missing-upgrade-402693 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-402693
	I1002 12:16:07.769945 2609183 fix.go:107] machineExists: false. err=machine does not exist
	I1002 12:16:07.772433 2609183 out.go:177] * docker "missing-upgrade-402693" container is missing, will recreate.
	I1002 12:16:07.774221 2609183 delete.go:124] DEMOLISHING missing-upgrade-402693 ...
	I1002 12:16:07.774329 2609183 cli_runner.go:164] Run: docker container inspect missing-upgrade-402693 --format={{.State.Status}}
	W1002 12:16:07.799761 2609183 cli_runner.go:211] docker container inspect missing-upgrade-402693 --format={{.State.Status}} returned with exit code 1
	W1002 12:16:07.799823 2609183 stop.go:75] unable to get state: unknown state "missing-upgrade-402693": docker container inspect missing-upgrade-402693 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-402693
	I1002 12:16:07.799844 2609183 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "missing-upgrade-402693": docker container inspect missing-upgrade-402693 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-402693
	I1002 12:16:07.800310 2609183 cli_runner.go:164] Run: docker container inspect missing-upgrade-402693 --format={{.State.Status}}
	W1002 12:16:07.825618 2609183 cli_runner.go:211] docker container inspect missing-upgrade-402693 --format={{.State.Status}} returned with exit code 1
	I1002 12:16:07.825680 2609183 delete.go:82] Unable to get host status for missing-upgrade-402693, assuming it has already been deleted: state: unknown state "missing-upgrade-402693": docker container inspect missing-upgrade-402693 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-402693
	I1002 12:16:07.825746 2609183 cli_runner.go:164] Run: docker container inspect -f {{.Id}} missing-upgrade-402693
	W1002 12:16:07.850461 2609183 cli_runner.go:211] docker container inspect -f {{.Id}} missing-upgrade-402693 returned with exit code 1
	I1002 12:16:07.850493 2609183 kic.go:367] could not find the container missing-upgrade-402693 to remove it. will try anyways
	I1002 12:16:07.850547 2609183 cli_runner.go:164] Run: docker container inspect missing-upgrade-402693 --format={{.State.Status}}
	W1002 12:16:07.870849 2609183 cli_runner.go:211] docker container inspect missing-upgrade-402693 --format={{.State.Status}} returned with exit code 1
	W1002 12:16:07.870904 2609183 oci.go:84] error getting container status, will try to delete anyways: unknown state "missing-upgrade-402693": docker container inspect missing-upgrade-402693 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-402693
	I1002 12:16:07.870971 2609183 cli_runner.go:164] Run: docker exec --privileged -t missing-upgrade-402693 /bin/bash -c "sudo init 0"
	W1002 12:16:07.889962 2609183 cli_runner.go:211] docker exec --privileged -t missing-upgrade-402693 /bin/bash -c "sudo init 0" returned with exit code 1
	I1002 12:16:07.889995 2609183 oci.go:647] error shutdown missing-upgrade-402693: docker exec --privileged -t missing-upgrade-402693 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-402693
	I1002 12:16:08.890271 2609183 cli_runner.go:164] Run: docker container inspect missing-upgrade-402693 --format={{.State.Status}}
	W1002 12:16:08.926091 2609183 cli_runner.go:211] docker container inspect missing-upgrade-402693 --format={{.State.Status}} returned with exit code 1
	I1002 12:16:08.926155 2609183 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-402693": docker container inspect missing-upgrade-402693 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-402693
	I1002 12:16:08.926169 2609183 oci.go:661] temporary error: container missing-upgrade-402693 status is  but expect it to be exited
	I1002 12:16:08.926209 2609183 retry.go:31] will retry after 321.403482ms: couldn't verify container is exited. %v: unknown state "missing-upgrade-402693": docker container inspect missing-upgrade-402693 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-402693
	I1002 12:16:09.248651 2609183 cli_runner.go:164] Run: docker container inspect missing-upgrade-402693 --format={{.State.Status}}
	W1002 12:16:09.272241 2609183 cli_runner.go:211] docker container inspect missing-upgrade-402693 --format={{.State.Status}} returned with exit code 1
	I1002 12:16:09.272317 2609183 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-402693": docker container inspect missing-upgrade-402693 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-402693
	I1002 12:16:09.272328 2609183 oci.go:661] temporary error: container missing-upgrade-402693 status is  but expect it to be exited
	I1002 12:16:09.272354 2609183 retry.go:31] will retry after 793.424548ms: couldn't verify container is exited. %v: unknown state "missing-upgrade-402693": docker container inspect missing-upgrade-402693 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-402693
	I1002 12:16:10.066638 2609183 cli_runner.go:164] Run: docker container inspect missing-upgrade-402693 --format={{.State.Status}}
	W1002 12:16:10.097695 2609183 cli_runner.go:211] docker container inspect missing-upgrade-402693 --format={{.State.Status}} returned with exit code 1
	I1002 12:16:10.097769 2609183 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-402693": docker container inspect missing-upgrade-402693 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-402693
	I1002 12:16:10.097784 2609183 oci.go:661] temporary error: container missing-upgrade-402693 status is  but expect it to be exited
	I1002 12:16:10.097811 2609183 retry.go:31] will retry after 1.334871957s: couldn't verify container is exited. %v: unknown state "missing-upgrade-402693": docker container inspect missing-upgrade-402693 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-402693
	I1002 12:16:11.432917 2609183 cli_runner.go:164] Run: docker container inspect missing-upgrade-402693 --format={{.State.Status}}
	W1002 12:16:11.451417 2609183 cli_runner.go:211] docker container inspect missing-upgrade-402693 --format={{.State.Status}} returned with exit code 1
	I1002 12:16:11.451479 2609183 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-402693": docker container inspect missing-upgrade-402693 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-402693
	I1002 12:16:11.451489 2609183 oci.go:661] temporary error: container missing-upgrade-402693 status is  but expect it to be exited
	I1002 12:16:11.451514 2609183 retry.go:31] will retry after 971.184153ms: couldn't verify container is exited. %v: unknown state "missing-upgrade-402693": docker container inspect missing-upgrade-402693 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-402693
	I1002 12:16:12.422893 2609183 cli_runner.go:164] Run: docker container inspect missing-upgrade-402693 --format={{.State.Status}}
	W1002 12:16:12.444471 2609183 cli_runner.go:211] docker container inspect missing-upgrade-402693 --format={{.State.Status}} returned with exit code 1
	I1002 12:16:12.444551 2609183 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-402693": docker container inspect missing-upgrade-402693 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-402693
	I1002 12:16:12.444566 2609183 oci.go:661] temporary error: container missing-upgrade-402693 status is  but expect it to be exited
	I1002 12:16:12.444591 2609183 retry.go:31] will retry after 3.165902848s: couldn't verify container is exited. %v: unknown state "missing-upgrade-402693": docker container inspect missing-upgrade-402693 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-402693
	I1002 12:16:15.611441 2609183 cli_runner.go:164] Run: docker container inspect missing-upgrade-402693 --format={{.State.Status}}
	W1002 12:16:15.634240 2609183 cli_runner.go:211] docker container inspect missing-upgrade-402693 --format={{.State.Status}} returned with exit code 1
	I1002 12:16:15.634298 2609183 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-402693": docker container inspect missing-upgrade-402693 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-402693
	I1002 12:16:15.634308 2609183 oci.go:661] temporary error: container missing-upgrade-402693 status is  but expect it to be exited
	I1002 12:16:15.634332 2609183 retry.go:31] will retry after 5.242221349s: couldn't verify container is exited. %v: unknown state "missing-upgrade-402693": docker container inspect missing-upgrade-402693 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-402693
	I1002 12:16:20.876740 2609183 cli_runner.go:164] Run: docker container inspect missing-upgrade-402693 --format={{.State.Status}}
	W1002 12:16:20.915562 2609183 cli_runner.go:211] docker container inspect missing-upgrade-402693 --format={{.State.Status}} returned with exit code 1
	I1002 12:16:20.915622 2609183 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-402693": docker container inspect missing-upgrade-402693 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-402693
	I1002 12:16:20.915633 2609183 oci.go:661] temporary error: container missing-upgrade-402693 status is  but expect it to be exited
	I1002 12:16:20.915664 2609183 retry.go:31] will retry after 6.882042679s: couldn't verify container is exited. %v: unknown state "missing-upgrade-402693": docker container inspect missing-upgrade-402693 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-402693
	I1002 12:16:27.800643 2609183 cli_runner.go:164] Run: docker container inspect missing-upgrade-402693 --format={{.State.Status}}
	W1002 12:16:27.828829 2609183 cli_runner.go:211] docker container inspect missing-upgrade-402693 --format={{.State.Status}} returned with exit code 1
	I1002 12:16:27.828885 2609183 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-402693": docker container inspect missing-upgrade-402693 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-402693
	I1002 12:16:27.828894 2609183 oci.go:661] temporary error: container missing-upgrade-402693 status is  but expect it to be exited
	I1002 12:16:27.828942 2609183 oci.go:88] couldn't shut down missing-upgrade-402693 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "missing-upgrade-402693": docker container inspect missing-upgrade-402693 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-402693
	 
	I1002 12:16:27.829000 2609183 cli_runner.go:164] Run: docker rm -f -v missing-upgrade-402693
	I1002 12:16:27.854091 2609183 cli_runner.go:164] Run: docker container inspect -f {{.Id}} missing-upgrade-402693
	W1002 12:16:27.894007 2609183 cli_runner.go:211] docker container inspect -f {{.Id}} missing-upgrade-402693 returned with exit code 1
	I1002 12:16:27.894095 2609183 cli_runner.go:164] Run: docker network inspect missing-upgrade-402693 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 12:16:27.923508 2609183 cli_runner.go:164] Run: docker network rm missing-upgrade-402693
	I1002 12:16:28.078503 2609183 fix.go:114] Sleeping 1 second for extra luck!
	I1002 12:16:29.079268 2609183 start.go:125] createHost starting for "" (driver="docker")
	I1002 12:16:29.081483 2609183 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1002 12:16:29.081637 2609183 start.go:159] libmachine.API.Create for "missing-upgrade-402693" (driver="docker")
	I1002 12:16:29.081664 2609183 client.go:168] LocalClient.Create starting
	I1002 12:16:29.081741 2609183 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17340-2494243/.minikube/certs/ca.pem
	I1002 12:16:29.081781 2609183 main.go:141] libmachine: Decoding PEM data...
	I1002 12:16:29.081804 2609183 main.go:141] libmachine: Parsing certificate...
	I1002 12:16:29.081864 2609183 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17340-2494243/.minikube/certs/cert.pem
	I1002 12:16:29.081887 2609183 main.go:141] libmachine: Decoding PEM data...
	I1002 12:16:29.081903 2609183 main.go:141] libmachine: Parsing certificate...
	I1002 12:16:29.082167 2609183 cli_runner.go:164] Run: docker network inspect missing-upgrade-402693 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1002 12:16:29.106391 2609183 cli_runner.go:211] docker network inspect missing-upgrade-402693 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1002 12:16:29.106484 2609183 network_create.go:281] running [docker network inspect missing-upgrade-402693] to gather additional debugging logs...
	I1002 12:16:29.106505 2609183 cli_runner.go:164] Run: docker network inspect missing-upgrade-402693
	W1002 12:16:29.132620 2609183 cli_runner.go:211] docker network inspect missing-upgrade-402693 returned with exit code 1
	I1002 12:16:29.132659 2609183 network_create.go:284] error running [docker network inspect missing-upgrade-402693]: docker network inspect missing-upgrade-402693: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network missing-upgrade-402693 not found
	I1002 12:16:29.132672 2609183 network_create.go:286] output of [docker network inspect missing-upgrade-402693]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network missing-upgrade-402693 not found
	
	** /stderr **
	I1002 12:16:29.132735 2609183 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 12:16:29.163229 2609183 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-ad66715ded82 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:d8:bc:22:f4} reservation:<nil>}
	I1002 12:16:29.163899 2609183 network.go:214] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-1637dc8803f8 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:7a:ae:68:89} reservation:<nil>}
	I1002 12:16:29.164418 2609183 network.go:214] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-e6e6af15f6ae IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:19:eb:29:25} reservation:<nil>}
	I1002 12:16:29.165006 2609183 network.go:214] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-96badd4d7cc6 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:02:42:f4:00:dd:a6} reservation:<nil>}
	I1002 12:16:29.165690 2609183 network.go:209] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019b8590}
	I1002 12:16:29.165741 2609183 network_create.go:123] attempt to create docker network missing-upgrade-402693 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1002 12:16:29.165839 2609183 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=missing-upgrade-402693 missing-upgrade-402693
	I1002 12:16:29.260882 2609183 network_create.go:107] docker network missing-upgrade-402693 192.168.85.0/24 created
	I1002 12:16:29.260910 2609183 kic.go:117] calculated static IP "192.168.85.2" for the "missing-upgrade-402693" container
	I1002 12:16:29.260986 2609183 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1002 12:16:29.291343 2609183 cli_runner.go:164] Run: docker volume create missing-upgrade-402693 --label name.minikube.sigs.k8s.io=missing-upgrade-402693 --label created_by.minikube.sigs.k8s.io=true
	I1002 12:16:29.323321 2609183 oci.go:103] Successfully created a docker volume missing-upgrade-402693
	I1002 12:16:29.323406 2609183 cli_runner.go:164] Run: docker run --rm --name missing-upgrade-402693-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-402693 --entrypoint /usr/bin/test -v missing-upgrade-402693:/var gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e -d /var/lib
	I1002 12:16:29.953632 2609183 oci.go:107] Successfully prepared a docker volume missing-upgrade-402693
	I1002 12:16:29.953657 2609183 preload.go:132] Checking if preload exists for k8s version v1.20.2 and runtime crio
	W1002 12:16:29.953924 2609183 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1002 12:16:29.954058 2609183 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1002 12:16:30.081833 2609183 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname missing-upgrade-402693 --name missing-upgrade-402693 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-402693 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=missing-upgrade-402693 --network missing-upgrade-402693 --ip 192.168.85.2 --volume missing-upgrade-402693:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e
	I1002 12:16:30.494201 2609183 cli_runner.go:164] Run: docker container inspect missing-upgrade-402693 --format={{.State.Running}}
	I1002 12:16:30.519764 2609183 cli_runner.go:164] Run: docker container inspect missing-upgrade-402693 --format={{.State.Status}}
	I1002 12:16:30.544410 2609183 cli_runner.go:164] Run: docker exec missing-upgrade-402693 stat /var/lib/dpkg/alternatives/iptables
	I1002 12:16:30.651423 2609183 oci.go:144] the created container "missing-upgrade-402693" has a running status.
	I1002 12:16:30.651449 2609183 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/17340-2494243/.minikube/machines/missing-upgrade-402693/id_rsa...
	I1002 12:16:33.295379 2609183 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17340-2494243/.minikube/machines/missing-upgrade-402693/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1002 12:16:33.320281 2609183 cli_runner.go:164] Run: docker container inspect missing-upgrade-402693 --format={{.State.Status}}
	I1002 12:16:33.354842 2609183 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1002 12:16:33.354862 2609183 kic_runner.go:114] Args: [docker exec --privileged missing-upgrade-402693 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1002 12:16:33.449687 2609183 cli_runner.go:164] Run: docker container inspect missing-upgrade-402693 --format={{.State.Status}}
	I1002 12:16:33.484856 2609183 machine.go:88] provisioning docker machine ...
	I1002 12:16:33.484903 2609183 ubuntu.go:169] provisioning hostname "missing-upgrade-402693"
	I1002 12:16:33.484996 2609183 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-402693
	I1002 12:16:33.508686 2609183 main.go:141] libmachine: Using SSH client type: native
	I1002 12:16:33.509123 2609183 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ac3c0] 0x3aeb30 <nil>  [] 0s} 127.0.0.1 36041 <nil> <nil>}
	I1002 12:16:33.509136 2609183 main.go:141] libmachine: About to run SSH command:
	sudo hostname missing-upgrade-402693 && echo "missing-upgrade-402693" | sudo tee /etc/hostname
	I1002 12:16:33.713483 2609183 main.go:141] libmachine: SSH cmd err, output: <nil>: missing-upgrade-402693
	
	I1002 12:16:33.713600 2609183 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-402693
	I1002 12:16:33.744282 2609183 main.go:141] libmachine: Using SSH client type: native
	I1002 12:16:33.744825 2609183 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ac3c0] 0x3aeb30 <nil>  [] 0s} 127.0.0.1 36041 <nil> <nil>}
	I1002 12:16:33.744850 2609183 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smissing-upgrade-402693' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 missing-upgrade-402693/g' /etc/hosts;
				else 
					echo '127.0.1.1 missing-upgrade-402693' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 12:16:33.902401 2609183 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 12:16:33.902426 2609183 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17340-2494243/.minikube CaCertPath:/home/jenkins/minikube-integration/17340-2494243/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17340-2494243/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17340-2494243/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17340-2494243/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17340-2494243/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17340-2494243/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17340-2494243/.minikube}
	I1002 12:16:33.902445 2609183 ubuntu.go:177] setting up certificates
	I1002 12:16:33.902454 2609183 provision.go:83] configureAuth start
	I1002 12:16:33.902521 2609183 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-402693
	I1002 12:16:33.922847 2609183 provision.go:138] copyHostCerts
	I1002 12:16:33.922907 2609183 exec_runner.go:144] found /home/jenkins/minikube-integration/17340-2494243/.minikube/cert.pem, removing ...
	I1002 12:16:33.922915 2609183 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17340-2494243/.minikube/cert.pem
	I1002 12:16:33.922987 2609183 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-2494243/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17340-2494243/.minikube/cert.pem (1123 bytes)
	I1002 12:16:33.923072 2609183 exec_runner.go:144] found /home/jenkins/minikube-integration/17340-2494243/.minikube/key.pem, removing ...
	I1002 12:16:33.923104 2609183 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17340-2494243/.minikube/key.pem
	I1002 12:16:33.923131 2609183 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-2494243/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17340-2494243/.minikube/key.pem (1675 bytes)
	I1002 12:16:33.923181 2609183 exec_runner.go:144] found /home/jenkins/minikube-integration/17340-2494243/.minikube/ca.pem, removing ...
	I1002 12:16:33.923185 2609183 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17340-2494243/.minikube/ca.pem
	I1002 12:16:33.923208 2609183 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-2494243/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17340-2494243/.minikube/ca.pem (1082 bytes)
	I1002 12:16:33.923261 2609183 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17340-2494243/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17340-2494243/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17340-2494243/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-402693 san=[192.168.85.2 127.0.0.1 localhost 127.0.0.1 minikube missing-upgrade-402693]
	I1002 12:16:34.328422 2609183 provision.go:172] copyRemoteCerts
	I1002 12:16:34.328559 2609183 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 12:16:34.328627 2609183 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-402693
	I1002 12:16:34.347347 2609183 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36041 SSHKeyPath:/home/jenkins/minikube-integration/17340-2494243/.minikube/machines/missing-upgrade-402693/id_rsa Username:docker}
	I1002 12:16:34.447449 2609183 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2494243/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 12:16:34.474603 2609183 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2494243/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1002 12:16:34.502382 2609183 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2494243/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 12:16:34.529996 2609183 provision.go:86] duration metric: configureAuth took 627.528396ms
	I1002 12:16:34.530028 2609183 ubuntu.go:193] setting minikube options for container-runtime
	I1002 12:16:34.530255 2609183 config.go:182] Loaded profile config "missing-upgrade-402693": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I1002 12:16:34.530404 2609183 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-402693
	I1002 12:16:34.558014 2609183 main.go:141] libmachine: Using SSH client type: native
	I1002 12:16:34.558434 2609183 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ac3c0] 0x3aeb30 <nil>  [] 0s} 127.0.0.1 36041 <nil> <nil>}
	I1002 12:16:34.558457 2609183 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 12:16:37.602026 2609183 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 12:16:37.602059 2609183 machine.go:91] provisioned docker machine in 4.117169555s
	I1002 12:16:37.602070 2609183 client.go:171] LocalClient.Create took 8.520396664s
	I1002 12:16:37.602083 2609183 start.go:167] duration metric: libmachine.API.Create for "missing-upgrade-402693" took 8.520448406s
	I1002 12:16:37.602091 2609183 start.go:300] post-start starting for "missing-upgrade-402693" (driver="docker")
	I1002 12:16:37.602108 2609183 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 12:16:37.602183 2609183 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 12:16:37.602233 2609183 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-402693
	I1002 12:16:37.639505 2609183 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36041 SSHKeyPath:/home/jenkins/minikube-integration/17340-2494243/.minikube/machines/missing-upgrade-402693/id_rsa Username:docker}
	I1002 12:16:37.755182 2609183 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 12:16:37.759267 2609183 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1002 12:16:37.759298 2609183 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 12:16:37.759311 2609183 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1002 12:16:37.759319 2609183 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	I1002 12:16:37.759330 2609183 filesync.go:126] Scanning /home/jenkins/minikube-integration/17340-2494243/.minikube/addons for local assets ...
	I1002 12:16:37.759384 2609183 filesync.go:126] Scanning /home/jenkins/minikube-integration/17340-2494243/.minikube/files for local assets ...
	I1002 12:16:37.759472 2609183 filesync.go:149] local asset: /home/jenkins/minikube-integration/17340-2494243/.minikube/files/etc/ssl/certs/24995982.pem -> 24995982.pem in /etc/ssl/certs
	I1002 12:16:37.759581 2609183 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 12:16:37.769750 2609183 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2494243/.minikube/files/etc/ssl/certs/24995982.pem --> /etc/ssl/certs/24995982.pem (1708 bytes)
	I1002 12:16:37.804329 2609183 start.go:303] post-start completed in 202.214078ms
	I1002 12:16:37.804803 2609183 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-402693
	I1002 12:16:37.829968 2609183 profile.go:148] Saving config to /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/missing-upgrade-402693/config.json ...
	I1002 12:16:37.830328 2609183 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 12:16:37.830434 2609183 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-402693
	I1002 12:16:37.857882 2609183 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36041 SSHKeyPath:/home/jenkins/minikube-integration/17340-2494243/.minikube/machines/missing-upgrade-402693/id_rsa Username:docker}
	I1002 12:16:37.962450 2609183 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 12:16:37.967948 2609183 start.go:128] duration metric: createHost completed in 8.888643285s
	I1002 12:16:37.968044 2609183 cli_runner.go:164] Run: docker container inspect missing-upgrade-402693 --format={{.State.Status}}
	W1002 12:16:38.003087 2609183 fix.go:128] unexpected machine state, will restart: <nil>
	I1002 12:16:38.003117 2609183 machine.go:88] provisioning docker machine ...
	I1002 12:16:38.003136 2609183 ubuntu.go:169] provisioning hostname "missing-upgrade-402693"
	I1002 12:16:38.003211 2609183 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-402693
	I1002 12:16:38.043734 2609183 main.go:141] libmachine: Using SSH client type: native
	I1002 12:16:38.044742 2609183 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ac3c0] 0x3aeb30 <nil>  [] 0s} 127.0.0.1 36041 <nil> <nil>}
	I1002 12:16:38.044767 2609183 main.go:141] libmachine: About to run SSH command:
	sudo hostname missing-upgrade-402693 && echo "missing-upgrade-402693" | sudo tee /etc/hostname
	I1002 12:16:38.293098 2609183 main.go:141] libmachine: SSH cmd err, output: <nil>: missing-upgrade-402693
	
	I1002 12:16:38.293181 2609183 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-402693
	I1002 12:16:38.319568 2609183 main.go:141] libmachine: Using SSH client type: native
	I1002 12:16:38.319971 2609183 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ac3c0] 0x3aeb30 <nil>  [] 0s} 127.0.0.1 36041 <nil> <nil>}
	I1002 12:16:38.319989 2609183 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smissing-upgrade-402693' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 missing-upgrade-402693/g' /etc/hosts;
				else 
					echo '127.0.1.1 missing-upgrade-402693' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 12:16:38.491510 2609183 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 12:16:38.491542 2609183 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17340-2494243/.minikube CaCertPath:/home/jenkins/minikube-integration/17340-2494243/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17340-2494243/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17340-2494243/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17340-2494243/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17340-2494243/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17340-2494243/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17340-2494243/.minikube}
	I1002 12:16:38.491561 2609183 ubuntu.go:177] setting up certificates
	I1002 12:16:38.491571 2609183 provision.go:83] configureAuth start
	I1002 12:16:38.491645 2609183 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-402693
	I1002 12:16:38.532800 2609183 provision.go:138] copyHostCerts
	I1002 12:16:38.532860 2609183 exec_runner.go:144] found /home/jenkins/minikube-integration/17340-2494243/.minikube/ca.pem, removing ...
	I1002 12:16:38.532868 2609183 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17340-2494243/.minikube/ca.pem
	I1002 12:16:38.532942 2609183 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-2494243/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17340-2494243/.minikube/ca.pem (1082 bytes)
	I1002 12:16:38.533037 2609183 exec_runner.go:144] found /home/jenkins/minikube-integration/17340-2494243/.minikube/cert.pem, removing ...
	I1002 12:16:38.533042 2609183 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17340-2494243/.minikube/cert.pem
	I1002 12:16:38.533068 2609183 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-2494243/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17340-2494243/.minikube/cert.pem (1123 bytes)
	I1002 12:16:38.533127 2609183 exec_runner.go:144] found /home/jenkins/minikube-integration/17340-2494243/.minikube/key.pem, removing ...
	I1002 12:16:38.533132 2609183 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17340-2494243/.minikube/key.pem
	I1002 12:16:38.533157 2609183 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-2494243/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17340-2494243/.minikube/key.pem (1675 bytes)
	I1002 12:16:38.533206 2609183 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17340-2494243/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17340-2494243/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17340-2494243/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-402693 san=[192.168.85.2 127.0.0.1 localhost 127.0.0.1 minikube missing-upgrade-402693]
	I1002 12:16:39.539158 2609183 provision.go:172] copyRemoteCerts
	I1002 12:16:39.539227 2609183 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 12:16:39.539270 2609183 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-402693
	I1002 12:16:39.564709 2609183 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36041 SSHKeyPath:/home/jenkins/minikube-integration/17340-2494243/.minikube/machines/missing-upgrade-402693/id_rsa Username:docker}
	I1002 12:16:39.677987 2609183 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2494243/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 12:16:39.743774 2609183 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2494243/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1002 12:16:39.771235 2609183 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2494243/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1002 12:16:39.798071 2609183 provision.go:86] duration metric: configureAuth took 1.306486347s
	I1002 12:16:39.798101 2609183 ubuntu.go:193] setting minikube options for container-runtime
	I1002 12:16:39.798279 2609183 config.go:182] Loaded profile config "missing-upgrade-402693": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I1002 12:16:39.798384 2609183 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-402693
	I1002 12:16:39.825259 2609183 main.go:141] libmachine: Using SSH client type: native
	I1002 12:16:39.825661 2609183 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ac3c0] 0x3aeb30 <nil>  [] 0s} 127.0.0.1 36041 <nil> <nil>}
	I1002 12:16:39.825675 2609183 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 12:16:40.184128 2609183 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 12:16:40.184162 2609183 machine.go:91] provisioned docker machine in 2.181025174s
	I1002 12:16:40.184173 2609183 start.go:300] post-start starting for "missing-upgrade-402693" (driver="docker")
	I1002 12:16:40.184185 2609183 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 12:16:40.184258 2609183 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 12:16:40.184312 2609183 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-402693
	I1002 12:16:40.217796 2609183 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36041 SSHKeyPath:/home/jenkins/minikube-integration/17340-2494243/.minikube/machines/missing-upgrade-402693/id_rsa Username:docker}
	I1002 12:16:40.318040 2609183 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 12:16:40.322398 2609183 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1002 12:16:40.322426 2609183 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 12:16:40.322438 2609183 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1002 12:16:40.322446 2609183 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	I1002 12:16:40.322455 2609183 filesync.go:126] Scanning /home/jenkins/minikube-integration/17340-2494243/.minikube/addons for local assets ...
	I1002 12:16:40.322515 2609183 filesync.go:126] Scanning /home/jenkins/minikube-integration/17340-2494243/.minikube/files for local assets ...
	I1002 12:16:40.322597 2609183 filesync.go:149] local asset: /home/jenkins/minikube-integration/17340-2494243/.minikube/files/etc/ssl/certs/24995982.pem -> 24995982.pem in /etc/ssl/certs
	I1002 12:16:40.322702 2609183 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 12:16:40.332000 2609183 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2494243/.minikube/files/etc/ssl/certs/24995982.pem --> /etc/ssl/certs/24995982.pem (1708 bytes)
	I1002 12:16:40.355933 2609183 start.go:303] post-start completed in 171.743416ms
	I1002 12:16:40.356018 2609183 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 12:16:40.356062 2609183 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-402693
	I1002 12:16:40.383519 2609183 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36041 SSHKeyPath:/home/jenkins/minikube-integration/17340-2494243/.minikube/machines/missing-upgrade-402693/id_rsa Username:docker}
	I1002 12:16:40.486501 2609183 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 12:16:40.492142 2609183 fix.go:56] fixHost completed within 32.740614713s
	I1002 12:16:40.492167 2609183 start.go:83] releasing machines lock for "missing-upgrade-402693", held for 32.740673742s
	I1002 12:16:40.492240 2609183 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-402693
	I1002 12:16:40.510752 2609183 ssh_runner.go:195] Run: cat /version.json
	I1002 12:16:40.510790 2609183 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 12:16:40.510804 2609183 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-402693
	I1002 12:16:40.510849 2609183 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-402693
	I1002 12:16:40.542671 2609183 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36041 SSHKeyPath:/home/jenkins/minikube-integration/17340-2494243/.minikube/machines/missing-upgrade-402693/id_rsa Username:docker}
	I1002 12:16:40.548724 2609183 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36041 SSHKeyPath:/home/jenkins/minikube-integration/17340-2494243/.minikube/machines/missing-upgrade-402693/id_rsa Username:docker}
	W1002 12:16:40.748085 2609183 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1002 12:16:40.748164 2609183 ssh_runner.go:195] Run: systemctl --version
	I1002 12:16:40.754904 2609183 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 12:16:40.882473 2609183 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1002 12:16:40.888278 2609183 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 12:16:40.908427 2609183 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1002 12:16:40.908572 2609183 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 12:16:40.950195 2609183 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1002 12:16:40.950223 2609183 start.go:469] detecting cgroup driver to use...
	I1002 12:16:40.950254 2609183 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1002 12:16:40.950311 2609183 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 12:16:40.991696 2609183 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 12:16:41.005963 2609183 docker.go:197] disabling cri-docker service (if available) ...
	I1002 12:16:41.006033 2609183 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 12:16:41.020801 2609183 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 12:16:41.038860 2609183 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W1002 12:16:41.053211 2609183 docker.go:207] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I1002 12:16:41.053279 2609183 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 12:16:41.184119 2609183 docker.go:213] disabling docker service ...
	I1002 12:16:41.184183 2609183 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 12:16:41.200632 2609183 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 12:16:41.213791 2609183 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 12:16:41.350900 2609183 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 12:16:41.489124 2609183 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 12:16:41.510981 2609183 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 12:16:41.539613 2609183 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1002 12:16:41.539685 2609183 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 12:16:41.553198 2609183 out.go:177] 
	W1002 12:16:41.555332 2609183 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W1002 12:16:41.555405 2609183 out.go:239] * 
	W1002 12:16:41.556358 2609183 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 12:16:41.557670 2609183 out.go:177] 

** /stderr **
version_upgrade_test.go:344: failed missing container upgrade from v1.17.0. args: out/minikube-linux-arm64 start -p missing-upgrade-402693 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio : exit status 90
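The root cause in the log above is that minikube's pause-image update runs `sed -i` against `/etc/crio/crio.conf.d/02-crio.conf`, which does not exist in the older kicbase image, so `sed` exits with status 2. A minimal sketch of a tolerant variant (the guard and temp path are illustrative, not minikube's actual fix; it uses a throwaway file rather than `/etc/crio`):

```shell
# Sketch: rewrite pause_image only if the drop-in exists; otherwise create it.
# CONF is a stand-in for /etc/crio/crio.conf.d/02-crio.conf.
CONF="$(mktemp -d)/02-crio.conf"
PAUSE="registry.k8s.io/pause:3.2"

if [ -f "$CONF" ]; then
  # Same substitution the log shows, but only run when the file is readable.
  sed -i "s|^.*pause_image = .*\$|pause_image = \"$PAUSE\"|" "$CONF"
else
  # Missing drop-in: create it with the desired pause image instead of failing.
  printf 'pause_image = "%s"\n' "$PAUSE" > "$CONF"
fi
cat "$CONF"
```

Either branch leaves the file containing a single `pause_image` line, which is what the subsequent crio restart expects.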
version_upgrade_test.go:346: *** TestMissingContainerUpgrade FAILED at 2023-10-02 12:16:41.612928392 +0000 UTC m=+2261.402338708
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMissingContainerUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect missing-upgrade-402693
helpers_test.go:235: (dbg) docker inspect missing-upgrade-402693:

-- stdout --
	[
	    {
	        "Id": "71d59076583abb25f1f63609d45a8f9fbf5d611706b50c9e31c4f2c9c47dc5c7",
	        "Created": "2023-10-02T12:16:30.102476467Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2611467,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-10-02T12:16:30.483211762Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:9b79b8263a5873a7b57b8bb7698df1f71e90108b3174dea92dc6c576c0a9dbf9",
	        "ResolvConfPath": "/var/lib/docker/containers/71d59076583abb25f1f63609d45a8f9fbf5d611706b50c9e31c4f2c9c47dc5c7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/71d59076583abb25f1f63609d45a8f9fbf5d611706b50c9e31c4f2c9c47dc5c7/hostname",
	        "HostsPath": "/var/lib/docker/containers/71d59076583abb25f1f63609d45a8f9fbf5d611706b50c9e31c4f2c9c47dc5c7/hosts",
	        "LogPath": "/var/lib/docker/containers/71d59076583abb25f1f63609d45a8f9fbf5d611706b50c9e31c4f2c9c47dc5c7/71d59076583abb25f1f63609d45a8f9fbf5d611706b50c9e31c4f2c9c47dc5c7-json.log",
	        "Name": "/missing-upgrade-402693",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "missing-upgrade-402693:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "missing-upgrade-402693",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/2ed88e36428a7532cdf5f1f79e4f0e5af482a701566d1b8bdff64fb6440ee774-init/diff:/var/lib/docker/overlay2/171d4493f46f8c7408ae4471b7c9af7221b748f0df832fc34f5b1c152e320f4c/diff:/var/lib/docker/overlay2/6daf6f8d61663174f03998e478544e537d890a15e00b6eaeb77a19582a2ba623/diff:/var/lib/docker/overlay2/ac2037b9d2bb44d5a148a3ba313d13f7e83d6fc8d14222d5c0350fd5888339c9/diff:/var/lib/docker/overlay2/78f703c84f03a4ee241d0fbe31c8d35dd6f5e16bcb733b72f14df48602e9c4ec/diff:/var/lib/docker/overlay2/63cafaa7657767d167f3d6392702a9d3ba128a1eb9305141149cb0f45c6f5786/diff:/var/lib/docker/overlay2/33f2ad15a415acd90acf6e106a8adbe7f8a832643f1dfd7ae51f4fc6aabaf723/diff:/var/lib/docker/overlay2/557f25359e3f3ac601bed43c01a277915d2b53d67b0775a6a9b71b5daf33b10a/diff:/var/lib/docker/overlay2/8ecb1842888e5154bcfb91d8f7082e7ec6bdd6bbe62fcd018df355e277f64c3c/diff:/var/lib/docker/overlay2/a3dba62c267f925e418a74ebbc103cd6c999d2cd053b5c5e88899af24402108a/diff:/var/lib/docker/overlay2/a3d9b0
8500f5bbdc1b7981cd9e4655c3566a470b7a761627af942106bced21fa/diff:/var/lib/docker/overlay2/27ff4e90d34cf392b721d93618e84f6dfb71cb23dcc36c6133c4319d312ef100/diff:/var/lib/docker/overlay2/1e67831c0610ec4a981f9085b1553d392761f5d5ccf0d72bfd6845d65879cc74/diff:/var/lib/docker/overlay2/30521aec27134ab886d2a2f44acaaf22455b1d7bdd33bf9175cc0416745ce746/diff:/var/lib/docker/overlay2/c45daba8fbaa111e1b32ebfb2f0666d724783ecdd905a6fecfa95df1b82dac41/diff:/var/lib/docker/overlay2/6b8f85984a1feb92ff99473fd35b4333da5a39029937e1d5ee7bd6ea85c4dc5a/diff:/var/lib/docker/overlay2/dd97dacf9714edb4723aa1b6a58a89a0472144010cd26394d86af41b38c65b09/diff:/var/lib/docker/overlay2/781f7bde8325b5bdbfb75f19f03e80dc0a3af0f31550132b6f8350fa1811332f/diff:/var/lib/docker/overlay2/fd951880fabb658b6b01c6b988e0d03b17fc5cace0be9704aeec312127ab7bef/diff:/var/lib/docker/overlay2/e77f0266d356d38817957630b44b226371c9d5073f496adf2ccdbc8f05db971f/diff:/var/lib/docker/overlay2/0c6a75ed8029c31d37cec2c1f6374089c3c49ccd1606dd5b51e5148825d7e133/diff:/var/lib/d
ocker/overlay2/becad536bdb29fa8befc9748296e6c43c65d966c1ab2f95d273ddb43788025b7/diff:/var/lib/docker/overlay2/03ce0af76a2ce19e261ed06a949e3d53cc9e495b0be6460409a0c6b3f9b7c20c/diff:/var/lib/docker/overlay2/9770bd55084342c7e18a0aa3713c2a4c1487f62e35754af68db4e4f82a0f418a/diff:/var/lib/docker/overlay2/89f0b4b1daf89a2c7b3d80d0a7c3cce45944a744350551f405ae37e34da94499/diff:/var/lib/docker/overlay2/96fbd06b09569db7dc5e485e0dd8b09333e5685fc3f37dad3734c49eeb7dc967/diff:/var/lib/docker/overlay2/1f7f2b99ddc7be8e0375129cbaaf5fa0e0601ab256d0385d17ec96514b836bee/diff:/var/lib/docker/overlay2/2a23691a008d98266197b5fe3a466e4645f34776084ae2e594c2ed6f68c7637f/diff:/var/lib/docker/overlay2/11567ecd5b13d18d71f1ae04d487eb1ff2165eb6434f82ba24f315bcea5b6743/diff:/var/lib/docker/overlay2/9af30912fa98987aa12ea5eb6baac05171831798364515841fc2a1f41c0573f0/diff:/var/lib/docker/overlay2/b7b35d67fd23bc76d88e963f00d5ded4f4c9156af1f6b65ad4bbdef476dc7ce9/diff:/var/lib/docker/overlay2/01d57a58c57c4724e39344078ae205a5f502e909814fd7a63ec559d77cf
a9576/diff:/var/lib/docker/overlay2/fa334b3c40a80f5ec45ca3d14e08e547b79f6cb6b4752fd743c03b4bde5d5f8a/diff:/var/lib/docker/overlay2/2ab23e53a5074ca484e0298d211dddda90abb3367b9daee304a13fad75c81ddc/diff:/var/lib/docker/overlay2/2c0f7dd20716c2b3aa74682ef2dd9e533b0e501108db4238825b8c77bd1485fd/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2ed88e36428a7532cdf5f1f79e4f0e5af482a701566d1b8bdff64fb6440ee774/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2ed88e36428a7532cdf5f1f79e4f0e5af482a701566d1b8bdff64fb6440ee774/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2ed88e36428a7532cdf5f1f79e4f0e5af482a701566d1b8bdff64fb6440ee774/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "missing-upgrade-402693",
	                "Source": "/var/lib/docker/volumes/missing-upgrade-402693/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "missing-upgrade-402693",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "missing-upgrade-402693",
	                "name.minikube.sigs.k8s.io": "missing-upgrade-402693",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5711233ecba94b253322a91193e1cbab553e46ad13249bcef3519658af2133e4",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36041"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36040"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36037"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36039"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36038"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/5711233ecba9",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "missing-upgrade-402693": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "71d59076583a",
	                        "missing-upgrade-402693"
	                    ],
	                    "NetworkID": "137f2f865e3f9f3bdbc84889d112714458e45d40b7a948cdcd2dd4f82d4b4165",
	                    "EndpointID": "0f4df059a6b49a4e63a5edd42e22c3747611a5f31762fe7626b3140c5c7c6371",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:55:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p missing-upgrade-402693 -n missing-upgrade-402693
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p missing-upgrade-402693 -n missing-upgrade-402693: exit status 6 (478.668339ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1002 12:16:42.118805 2614045 status.go:415] kubeconfig endpoint: got: 192.168.59.6:8443, want: 192.168.85.2:8443

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "missing-upgrade-402693" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "missing-upgrade-402693" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-402693
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-402693: (2.334568299s)
--- FAIL: TestMissingContainerUpgrade (149.78s)

TestStoppedBinaryUpgrade/Upgrade (71.98s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:196: (dbg) Run:  /tmp/minikube-v1.17.0.3827657864.exe start -p stopped-upgrade-998345 --memory=2200 --vm-driver=docker  --container-runtime=crio
E1002 12:21:56.257174 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/addons-346248/client.crt: no such file or directory
version_upgrade_test.go:196: (dbg) Done: /tmp/minikube-v1.17.0.3827657864.exe start -p stopped-upgrade-998345 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m2.815721523s)
version_upgrade_test.go:205: (dbg) Run:  /tmp/minikube-v1.17.0.3827657864.exe -p stopped-upgrade-998345 stop
version_upgrade_test.go:205: (dbg) Done: /tmp/minikube-v1.17.0.3827657864.exe -p stopped-upgrade-998345 stop: (2.141182195s)
version_upgrade_test.go:211: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-998345 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:211: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p stopped-upgrade-998345 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (7.015048084s)

-- stdout --
	* [stopped-upgrade-998345] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17340
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17340-2494243/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17340-2494243/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.2
	* Using the docker driver based on existing profile
	* Starting control plane node stopped-upgrade-998345 in cluster stopped-upgrade-998345
	* Pulling base image ...
	* Restarting existing docker container for "stopped-upgrade-998345" ...
	
	

-- /stdout --
** stderr ** 
	I1002 12:22:03.368791 2634178 out.go:296] Setting OutFile to fd 1 ...
	I1002 12:22:03.369014 2634178 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 12:22:03.369021 2634178 out.go:309] Setting ErrFile to fd 2...
	I1002 12:22:03.369027 2634178 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 12:22:03.369393 2634178 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17340-2494243/.minikube/bin
	I1002 12:22:03.370876 2634178 out.go:303] Setting JSON to false
	I1002 12:22:03.372186 2634178 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":72269,"bootTime":1696177054,"procs":231,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1045-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1002 12:22:03.372299 2634178 start.go:138] virtualization:  
	I1002 12:22:03.376031 2634178 out.go:177] * [stopped-upgrade-998345] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I1002 12:22:03.378126 2634178 preload.go:306] deleting older generation preload /home/jenkins/minikube-integration/17340-2494243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.2-cri-o-overlay-arm64.tar.lz4
	I1002 12:22:03.384254 2634178 notify.go:220] Checking for updates...
	I1002 12:22:03.388317 2634178 out.go:177]   - MINIKUBE_LOCATION=17340
	I1002 12:22:03.391153 2634178 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 12:22:03.393524 2634178 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17340-2494243/kubeconfig
	I1002 12:22:03.395537 2634178 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17340-2494243/.minikube
	I1002 12:22:03.397282 2634178 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 12:22:03.400474 2634178 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 12:22:03.403117 2634178 config.go:182] Loaded profile config "stopped-upgrade-998345": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I1002 12:22:03.406733 2634178 out.go:177] * Kubernetes 1.28.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.2
	I1002 12:22:03.409113 2634178 driver.go:373] Setting default libvirt URI to qemu:///system
	I1002 12:22:03.508864 2634178 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I1002 12:22:03.508982 2634178 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 12:22:03.598950 2634178 preload.go:306] deleting older generation preload /home/jenkins/minikube-integration/17340-2494243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.2-cri-o-overlay-arm64.tar.lz4.checksum
	I1002 12:22:03.668817 2634178 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:55 SystemTime:2023-10-02 12:22:03.657261411 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1045-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1002 12:22:03.668927 2634178 docker.go:294] overlay module found
	I1002 12:22:03.673564 2634178 out.go:177] * Using the docker driver based on existing profile
	I1002 12:22:03.675520 2634178 start.go:298] selected driver: docker
	I1002 12:22:03.675536 2634178 start.go:902] validating driver "docker" against &{Name:stopped-upgrade-998345 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:stopped-upgrade-998345 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.70.202 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s}
	I1002 12:22:03.675749 2634178 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 12:22:03.676663 2634178 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 12:22:03.816106 2634178 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:55 SystemTime:2023-10-02 12:22:03.801068843 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1045-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1002 12:22:03.816414 2634178 cni.go:84] Creating CNI manager for ""
	I1002 12:22:03.816424 2634178 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 12:22:03.816433 2634178 start_flags.go:321] config:
	{Name:stopped-upgrade-998345 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:stopped-upgrade-998345 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.70.202 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s}
	I1002 12:22:03.818792 2634178 out.go:177] * Starting control plane node stopped-upgrade-998345 in cluster stopped-upgrade-998345
	I1002 12:22:03.821108 2634178 cache.go:122] Beginning downloading kic base image for docker with crio
	I1002 12:22:03.823315 2634178 out.go:177] * Pulling base image ...
	I1002 12:22:03.825261 2634178 preload.go:132] Checking if preload exists for k8s version v1.20.2 and runtime crio
	I1002 12:22:03.825430 2634178 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local docker daemon
	I1002 12:22:03.857708 2634178 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local docker daemon, skipping pull
	I1002 12:22:03.857734 2634178 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e exists in daemon, skipping load
	W1002 12:22:03.905445 2634178 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.2/preloaded-images-k8s-v18-v1.20.2-cri-o-overlay-arm64.tar.lz4 status code: 404
	I1002 12:22:03.905590 2634178 profile.go:148] Saving config to /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/stopped-upgrade-998345/config.json ...
	I1002 12:22:03.905857 2634178 cache.go:195] Successfully downloaded all kic artifacts
	I1002 12:22:03.905901 2634178 start.go:365] acquiring machines lock for stopped-upgrade-998345: {Name:mke7311d044375fc9f1596184aac83b52d8da282 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 12:22:03.905960 2634178 start.go:369] acquired machines lock for "stopped-upgrade-998345" in 32.533µs
	I1002 12:22:03.905980 2634178 start.go:96] Skipping create...Using existing machine configuration
	I1002 12:22:03.905989 2634178 fix.go:54] fixHost starting: 
	I1002 12:22:03.906259 2634178 cli_runner.go:164] Run: docker container inspect stopped-upgrade-998345 --format={{.State.Status}}
	I1002 12:22:03.906503 2634178 cache.go:107] acquiring lock: {Name:mkc887fe5cdb6eeafbff75697289cf8eb6c02b53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 12:22:03.906575 2634178 cache.go:115] /home/jenkins/minikube-integration/17340-2494243/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1002 12:22:03.906588 2634178 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17340-2494243/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 90.322µs
	I1002 12:22:03.906597 2634178 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17340-2494243/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1002 12:22:03.906608 2634178 cache.go:107] acquiring lock: {Name:mk1a662d92affbff9c1cc28ea8291fabf268033e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 12:22:03.906644 2634178 cache.go:115] /home/jenkins/minikube-integration/17340-2494243/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 exists
	I1002 12:22:03.906654 2634178 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.20.2" -> "/home/jenkins/minikube-integration/17340-2494243/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2" took 47.032µs
	I1002 12:22:03.906661 2634178 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.20.2 -> /home/jenkins/minikube-integration/17340-2494243/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 succeeded
	I1002 12:22:03.906668 2634178 cache.go:107] acquiring lock: {Name:mk35152b0c9ad9d157e9936d1fd586fd9fbfb1d3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 12:22:03.906697 2634178 cache.go:115] /home/jenkins/minikube-integration/17340-2494243/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 exists
	I1002 12:22:03.906706 2634178 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.20.2" -> "/home/jenkins/minikube-integration/17340-2494243/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2" took 39.073µs
	I1002 12:22:03.906714 2634178 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.20.2 -> /home/jenkins/minikube-integration/17340-2494243/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 succeeded
	I1002 12:22:03.906721 2634178 cache.go:107] acquiring lock: {Name:mk15c598155ba70dcfefbe44417167251c9a7443 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 12:22:03.906751 2634178 cache.go:115] /home/jenkins/minikube-integration/17340-2494243/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 exists
	I1002 12:22:03.906760 2634178 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.20.2" -> "/home/jenkins/minikube-integration/17340-2494243/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2" took 40.837µs
	I1002 12:22:03.906767 2634178 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.20.2 -> /home/jenkins/minikube-integration/17340-2494243/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 succeeded
	I1002 12:22:03.906775 2634178 cache.go:107] acquiring lock: {Name:mk25c710a649cce1f9a491d98775e2197b453894 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 12:22:03.906804 2634178 cache.go:115] /home/jenkins/minikube-integration/17340-2494243/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 exists
	I1002 12:22:03.906813 2634178 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.20.2" -> "/home/jenkins/minikube-integration/17340-2494243/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2" took 40.911µs
	I1002 12:22:03.906829 2634178 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.20.2 -> /home/jenkins/minikube-integration/17340-2494243/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 succeeded
	I1002 12:22:03.906836 2634178 cache.go:107] acquiring lock: {Name:mkc9b31b3580328998105e1de3c4bcd3ff89f10b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 12:22:03.906870 2634178 cache.go:115] /home/jenkins/minikube-integration/17340-2494243/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 exists
	I1002 12:22:03.906879 2634178 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/17340-2494243/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2" took 43.569µs
	I1002 12:22:03.906886 2634178 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/17340-2494243/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 succeeded
	I1002 12:22:03.906902 2634178 cache.go:107] acquiring lock: {Name:mk30b3c60f7d3dd47fd80cb4e6230ec4b1ded053 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 12:22:03.906931 2634178 cache.go:115] /home/jenkins/minikube-integration/17340-2494243/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 exists
	I1002 12:22:03.906939 2634178 cache.go:96] cache image "registry.k8s.io/etcd:3.4.13-0" -> "/home/jenkins/minikube-integration/17340-2494243/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0" took 44.349µs
	I1002 12:22:03.906947 2634178 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.13-0 -> /home/jenkins/minikube-integration/17340-2494243/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 succeeded
	I1002 12:22:03.906954 2634178 cache.go:107] acquiring lock: {Name:mk45dd388dcc21a337ce62e1ca16382f65c6cf0d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 12:22:03.906982 2634178 cache.go:115] /home/jenkins/minikube-integration/17340-2494243/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 exists
	I1002 12:22:03.906991 2634178 cache.go:96] cache image "registry.k8s.io/coredns:1.7.0" -> "/home/jenkins/minikube-integration/17340-2494243/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0" took 38.031µs
	I1002 12:22:03.906998 2634178 cache.go:80] save to tar file registry.k8s.io/coredns:1.7.0 -> /home/jenkins/minikube-integration/17340-2494243/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 succeeded
	I1002 12:22:03.907003 2634178 cache.go:87] Successfully saved all images to host disk.
	I1002 12:22:03.926959 2634178 fix.go:102] recreateIfNeeded on stopped-upgrade-998345: state=Stopped err=<nil>
	W1002 12:22:03.926994 2634178 fix.go:128] unexpected machine state, will restart: <nil>
	I1002 12:22:03.929189 2634178 out.go:177] * Restarting existing docker container for "stopped-upgrade-998345" ...
	I1002 12:22:03.932961 2634178 cli_runner.go:164] Run: docker start stopped-upgrade-998345
	I1002 12:22:04.285424 2634178 cli_runner.go:164] Run: docker container inspect stopped-upgrade-998345 --format={{.State.Status}}
	I1002 12:22:04.314012 2634178 kic.go:426] container "stopped-upgrade-998345" state is running.
	I1002 12:22:04.314389 2634178 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-998345
	I1002 12:22:04.338730 2634178 profile.go:148] Saving config to /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/stopped-upgrade-998345/config.json ...
	I1002 12:22:04.338973 2634178 machine.go:88] provisioning docker machine ...
	I1002 12:22:04.338996 2634178 ubuntu.go:169] provisioning hostname "stopped-upgrade-998345"
	I1002 12:22:04.339052 2634178 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-998345
	I1002 12:22:04.363848 2634178 main.go:141] libmachine: Using SSH client type: native
	I1002 12:22:04.364275 2634178 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ac3c0] 0x3aeb30 <nil>  [] 0s} 127.0.0.1 36078 <nil> <nil>}
	I1002 12:22:04.364290 2634178 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-998345 && echo "stopped-upgrade-998345" | sudo tee /etc/hostname
	I1002 12:22:04.365031 2634178 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1002 12:22:07.524357 2634178 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-998345
	
	I1002 12:22:07.524460 2634178 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-998345
	I1002 12:22:07.543693 2634178 main.go:141] libmachine: Using SSH client type: native
	I1002 12:22:07.544118 2634178 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ac3c0] 0x3aeb30 <nil>  [] 0s} 127.0.0.1 36078 <nil> <nil>}
	I1002 12:22:07.544142 2634178 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-998345' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-998345/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-998345' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 12:22:07.686335 2634178 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 12:22:07.686364 2634178 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17340-2494243/.minikube CaCertPath:/home/jenkins/minikube-integration/17340-2494243/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17340-2494243/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17340-2494243/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17340-2494243/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17340-2494243/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17340-2494243/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17340-2494243/.minikube}
	I1002 12:22:07.686431 2634178 ubuntu.go:177] setting up certificates
	I1002 12:22:07.686442 2634178 provision.go:83] configureAuth start
	I1002 12:22:07.686533 2634178 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-998345
	I1002 12:22:07.708689 2634178 provision.go:138] copyHostCerts
	I1002 12:22:07.708763 2634178 exec_runner.go:144] found /home/jenkins/minikube-integration/17340-2494243/.minikube/ca.pem, removing ...
	I1002 12:22:07.708788 2634178 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17340-2494243/.minikube/ca.pem
	I1002 12:22:07.708867 2634178 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-2494243/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17340-2494243/.minikube/ca.pem (1082 bytes)
	I1002 12:22:07.708972 2634178 exec_runner.go:144] found /home/jenkins/minikube-integration/17340-2494243/.minikube/cert.pem, removing ...
	I1002 12:22:07.708978 2634178 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17340-2494243/.minikube/cert.pem
	I1002 12:22:07.709006 2634178 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-2494243/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17340-2494243/.minikube/cert.pem (1123 bytes)
	I1002 12:22:07.709107 2634178 exec_runner.go:144] found /home/jenkins/minikube-integration/17340-2494243/.minikube/key.pem, removing ...
	I1002 12:22:07.709112 2634178 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17340-2494243/.minikube/key.pem
	I1002 12:22:07.709137 2634178 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-2494243/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17340-2494243/.minikube/key.pem (1675 bytes)
	I1002 12:22:07.709181 2634178 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17340-2494243/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17340-2494243/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17340-2494243/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-998345 san=[192.168.70.202 127.0.0.1 localhost 127.0.0.1 minikube stopped-upgrade-998345]
	I1002 12:22:08.326345 2634178 provision.go:172] copyRemoteCerts
	I1002 12:22:08.326415 2634178 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 12:22:08.326459 2634178 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-998345
	I1002 12:22:08.352813 2634178 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36078 SSHKeyPath:/home/jenkins/minikube-integration/17340-2494243/.minikube/machines/stopped-upgrade-998345/id_rsa Username:docker}
	I1002 12:22:08.459623 2634178 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2494243/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 12:22:08.484309 2634178 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2494243/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1002 12:22:08.509319 2634178 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2494243/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 12:22:08.536236 2634178 provision.go:86] duration metric: configureAuth took 849.759971ms
	I1002 12:22:08.536308 2634178 ubuntu.go:193] setting minikube options for container-runtime
	I1002 12:22:08.536551 2634178 config.go:182] Loaded profile config "stopped-upgrade-998345": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I1002 12:22:08.536665 2634178 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-998345
	I1002 12:22:08.556655 2634178 main.go:141] libmachine: Using SSH client type: native
	I1002 12:22:08.557087 2634178 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ac3c0] 0x3aeb30 <nil>  [] 0s} 127.0.0.1 36078 <nil> <nil>}
	I1002 12:22:08.557108 2634178 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 12:22:09.010019 2634178 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 12:22:09.010051 2634178 machine.go:91] provisioned docker machine in 4.671050474s
	I1002 12:22:09.010064 2634178 start.go:300] post-start starting for "stopped-upgrade-998345" (driver="docker")
	I1002 12:22:09.010076 2634178 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 12:22:09.010165 2634178 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 12:22:09.010217 2634178 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-998345
	I1002 12:22:09.031581 2634178 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36078 SSHKeyPath:/home/jenkins/minikube-integration/17340-2494243/.minikube/machines/stopped-upgrade-998345/id_rsa Username:docker}
	I1002 12:22:09.139519 2634178 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 12:22:09.144222 2634178 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1002 12:22:09.144253 2634178 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 12:22:09.144265 2634178 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1002 12:22:09.144273 2634178 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	I1002 12:22:09.144284 2634178 filesync.go:126] Scanning /home/jenkins/minikube-integration/17340-2494243/.minikube/addons for local assets ...
	I1002 12:22:09.144364 2634178 filesync.go:126] Scanning /home/jenkins/minikube-integration/17340-2494243/.minikube/files for local assets ...
	I1002 12:22:09.144451 2634178 filesync.go:149] local asset: /home/jenkins/minikube-integration/17340-2494243/.minikube/files/etc/ssl/certs/24995982.pem -> 24995982.pem in /etc/ssl/certs
	I1002 12:22:09.144601 2634178 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 12:22:09.155069 2634178 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2494243/.minikube/files/etc/ssl/certs/24995982.pem --> /etc/ssl/certs/24995982.pem (1708 bytes)
	I1002 12:22:09.180628 2634178 start.go:303] post-start completed in 170.546362ms
	I1002 12:22:09.180721 2634178 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 12:22:09.180768 2634178 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-998345
	I1002 12:22:09.200790 2634178 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36078 SSHKeyPath:/home/jenkins/minikube-integration/17340-2494243/.minikube/machines/stopped-upgrade-998345/id_rsa Username:docker}
	I1002 12:22:09.298653 2634178 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 12:22:09.304578 2634178 fix.go:56] fixHost completed within 5.398579252s
	I1002 12:22:09.304654 2634178 start.go:83] releasing machines lock for "stopped-upgrade-998345", held for 5.398680117s
	I1002 12:22:09.304756 2634178 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-998345
	I1002 12:22:09.323054 2634178 ssh_runner.go:195] Run: cat /version.json
	I1002 12:22:09.323118 2634178 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-998345
	I1002 12:22:09.323376 2634178 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 12:22:09.323444 2634178 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-998345
	I1002 12:22:09.343428 2634178 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36078 SSHKeyPath:/home/jenkins/minikube-integration/17340-2494243/.minikube/machines/stopped-upgrade-998345/id_rsa Username:docker}
	I1002 12:22:09.354071 2634178 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36078 SSHKeyPath:/home/jenkins/minikube-integration/17340-2494243/.minikube/machines/stopped-upgrade-998345/id_rsa Username:docker}
	W1002 12:22:09.510765 2634178 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1002 12:22:09.510884 2634178 ssh_runner.go:195] Run: systemctl --version
	I1002 12:22:09.516809 2634178 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 12:22:09.717361 2634178 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1002 12:22:09.723463 2634178 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 12:22:09.743102 2634178 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1002 12:22:09.743181 2634178 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 12:22:09.777231 2634178 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1002 12:22:09.777257 2634178 start.go:469] detecting cgroup driver to use...
	I1002 12:22:09.777288 2634178 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1002 12:22:09.777338 2634178 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 12:22:09.813547 2634178 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 12:22:09.826098 2634178 docker.go:197] disabling cri-docker service (if available) ...
	I1002 12:22:09.826184 2634178 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 12:22:09.838024 2634178 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 12:22:09.850335 2634178 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W1002 12:22:09.863745 2634178 docker.go:207] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I1002 12:22:09.863826 2634178 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 12:22:09.976733 2634178 docker.go:213] disabling docker service ...
	I1002 12:22:09.976809 2634178 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 12:22:09.991657 2634178 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 12:22:10.009180 2634178 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 12:22:10.125463 2634178 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 12:22:10.253432 2634178 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 12:22:10.267417 2634178 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 12:22:10.286651 2634178 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1002 12:22:10.286797 2634178 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 12:22:10.300975 2634178 out.go:177] 
	W1002 12:22:10.302809 2634178 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W1002 12:22:10.302832 2634178 out.go:239] * 
	W1002 12:22:10.303807 2634178 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 12:22:10.305381 2634178 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:213: upgrade from v1.17.0 to HEAD failed: out/minikube-linux-arm64 start -p stopped-upgrade-998345 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (71.98s)
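The failure above is a config-layout mismatch: the v1.17.0-era guest image has no `/etc/crio/crio.conf.d/02-crio.conf`, so the `sed -i` that rewrites `pause_image` exits with status 2. A minimal sketch of a guard (an assumption for illustration, not minikube's actual fix — the helper `pick_crio_conf` and the `/tmp/oldimg` fake root are hypothetical) would probe for the drop-in file and fall back to the legacy single-file config before editing:

```shell
#!/bin/sh
# Sketch (assumption, not minikube's real code): choose whichever cri-o config
# file actually exists before rewriting pause_image, instead of assuming the
# drop-in directory is present on old guest images.
pick_crio_conf() {
  root="$1"
  if [ -f "$root/etc/crio/crio.conf.d/02-crio.conf" ]; then
    echo "$root/etc/crio/crio.conf.d/02-crio.conf"
  elif [ -f "$root/etc/crio/crio.conf" ]; then
    echo "$root/etc/crio/crio.conf"   # legacy single-file layout (v1.17-era image)
  fi
}

# Demonstrate against a fake root mimicking the old image layout.
mkdir -p /tmp/oldimg/etc/crio
printf 'pause_image = "k8s.gcr.io/pause:3.2"\n' > /tmp/oldimg/etc/crio/crio.conf

conf="$(pick_crio_conf /tmp/oldimg)"
if [ -n "$conf" ]; then
  # Same substitution the log shows, applied only to a file that exists.
  sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' "$conf"
else
  echo "no cri-o config found; skipping pause_image update" >&2
fi
```

With a guard like this the sed never runs against a missing path, so the `RUNTIME_ENABLE` exit in the log would not trigger on the older image.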

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (49.01s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-668509 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-668509 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (42.703452534s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-668509] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17340
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17340-2494243/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17340-2494243/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting control plane node pause-668509 in cluster pause-668509
	* Pulling base image ...
	* Updating the running docker "pause-668509" container ...
	* Preparing Kubernetes v1.28.2 on CRI-O 1.24.6 ...
	* Enabled addons: 
	* Verifying Kubernetes components...
	* Done! kubectl is now configured to use "pause-668509" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 12:23:32.180898 2639977 out.go:296] Setting OutFile to fd 1 ...
	I1002 12:23:32.181099 2639977 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 12:23:32.181110 2639977 out.go:309] Setting ErrFile to fd 2...
	I1002 12:23:32.181116 2639977 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 12:23:32.181420 2639977 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17340-2494243/.minikube/bin
	I1002 12:23:32.181809 2639977 out.go:303] Setting JSON to false
	I1002 12:23:32.182933 2639977 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":72358,"bootTime":1696177054,"procs":260,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1045-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1002 12:23:32.183010 2639977 start.go:138] virtualization:  
	I1002 12:23:32.185610 2639977 out.go:177] * [pause-668509] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I1002 12:23:32.187806 2639977 out.go:177]   - MINIKUBE_LOCATION=17340
	I1002 12:23:32.187981 2639977 notify.go:220] Checking for updates...
	I1002 12:23:32.191537 2639977 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 12:23:32.193419 2639977 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17340-2494243/kubeconfig
	I1002 12:23:32.195402 2639977 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17340-2494243/.minikube
	I1002 12:23:32.197413 2639977 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 12:23:32.199344 2639977 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 12:23:32.201692 2639977 config.go:182] Loaded profile config "pause-668509": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1002 12:23:32.202419 2639977 driver.go:373] Setting default libvirt URI to qemu:///system
	I1002 12:23:32.228165 2639977 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I1002 12:23:32.228272 2639977 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 12:23:32.322273 2639977 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:49 OomKillDisable:true NGoroutines:65 SystemTime:2023-10-02 12:23:32.309998554 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1045-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> S
erverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1002 12:23:32.322378 2639977 docker.go:294] overlay module found
	I1002 12:23:32.324585 2639977 out.go:177] * Using the docker driver based on existing profile
	I1002 12:23:32.326260 2639977 start.go:298] selected driver: docker
	I1002 12:23:32.326279 2639977 start.go:902] validating driver "docker" against &{Name:pause-668509 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:pause-668509 Namespace:default APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-c
reds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 12:23:32.326457 2639977 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 12:23:32.326572 2639977 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 12:23:32.396630 2639977 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:49 OomKillDisable:true NGoroutines:65 SystemTime:2023-10-02 12:23:32.386624913 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1045-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> S
erverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1002 12:23:32.397102 2639977 cni.go:84] Creating CNI manager for ""
	I1002 12:23:32.397121 2639977 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 12:23:32.397135 2639977 start_flags.go:321] config:
	{Name:pause-668509 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:pause-668509 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISo
cket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-p
rovisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 12:23:32.400667 2639977 out.go:177] * Starting control plane node pause-668509 in cluster pause-668509
	I1002 12:23:32.402595 2639977 cache.go:122] Beginning downloading kic base image for docker with crio
	I1002 12:23:32.404192 2639977 out.go:177] * Pulling base image ...
	I1002 12:23:32.405948 2639977 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1002 12:23:32.406008 2639977 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17340-2494243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-arm64.tar.lz4
	I1002 12:23:32.406021 2639977 cache.go:57] Caching tarball of preloaded images
	I1002 12:23:32.406062 2639977 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 in local docker daemon
	I1002 12:23:32.406119 2639977 preload.go:174] Found /home/jenkins/minikube-integration/17340-2494243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1002 12:23:32.406130 2639977 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on crio
	I1002 12:23:32.406255 2639977 profile.go:148] Saving config to /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/pause-668509/config.json ...
	I1002 12:23:32.425684 2639977 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 in local docker daemon, skipping pull
	I1002 12:23:32.425711 2639977 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 exists in daemon, skipping load
	I1002 12:23:32.425732 2639977 cache.go:195] Successfully downloaded all kic artifacts
	I1002 12:23:32.425774 2639977 start.go:365] acquiring machines lock for pause-668509: {Name:mka7b1d7db88c46f55df5c1454a55c5ef9dda60d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 12:23:32.425851 2639977 start.go:369] acquired machines lock for "pause-668509" in 53.022µs
	I1002 12:23:32.425875 2639977 start.go:96] Skipping create...Using existing machine configuration
	I1002 12:23:32.425884 2639977 fix.go:54] fixHost starting: 
	I1002 12:23:32.426171 2639977 cli_runner.go:164] Run: docker container inspect pause-668509 --format={{.State.Status}}
	I1002 12:23:32.450877 2639977 fix.go:102] recreateIfNeeded on pause-668509: state=Running err=<nil>
	W1002 12:23:32.450923 2639977 fix.go:128] unexpected machine state, will restart: <nil>
	I1002 12:23:32.453218 2639977 out.go:177] * Updating the running docker "pause-668509" container ...
	I1002 12:23:32.455334 2639977 machine.go:88] provisioning docker machine ...
	I1002 12:23:32.455363 2639977 ubuntu.go:169] provisioning hostname "pause-668509"
	I1002 12:23:32.455436 2639977 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-668509
	I1002 12:23:32.478449 2639977 main.go:141] libmachine: Using SSH client type: native
	I1002 12:23:32.478889 2639977 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ac3c0] 0x3aeb30 <nil>  [] 0s} 127.0.0.1 36083 <nil> <nil>}
	I1002 12:23:32.478953 2639977 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-668509 && echo "pause-668509" | sudo tee /etc/hostname
	I1002 12:23:32.641530 2639977 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-668509
	
	I1002 12:23:32.641616 2639977 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-668509
	I1002 12:23:32.665677 2639977 main.go:141] libmachine: Using SSH client type: native
	I1002 12:23:32.666204 2639977 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ac3c0] 0x3aeb30 <nil>  [] 0s} 127.0.0.1 36083 <nil> <nil>}
	I1002 12:23:32.666229 2639977 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-668509' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-668509/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-668509' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 12:23:32.810431 2639977 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 12:23:32.810504 2639977 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17340-2494243/.minikube CaCertPath:/home/jenkins/minikube-integration/17340-2494243/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17340-2494243/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17340-2494243/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17340-2494243/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17340-2494243/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17340-2494243/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17340-2494243/.minikube}
	I1002 12:23:32.810537 2639977 ubuntu.go:177] setting up certificates
	I1002 12:23:32.810548 2639977 provision.go:83] configureAuth start
	I1002 12:23:32.810611 2639977 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-668509
	I1002 12:23:32.829905 2639977 provision.go:138] copyHostCerts
	I1002 12:23:32.829973 2639977 exec_runner.go:144] found /home/jenkins/minikube-integration/17340-2494243/.minikube/ca.pem, removing ...
	I1002 12:23:32.829999 2639977 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17340-2494243/.minikube/ca.pem
	I1002 12:23:32.830080 2639977 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-2494243/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17340-2494243/.minikube/ca.pem (1082 bytes)
	I1002 12:23:32.830186 2639977 exec_runner.go:144] found /home/jenkins/minikube-integration/17340-2494243/.minikube/cert.pem, removing ...
	I1002 12:23:32.830196 2639977 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17340-2494243/.minikube/cert.pem
	I1002 12:23:32.830230 2639977 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-2494243/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17340-2494243/.minikube/cert.pem (1123 bytes)
	I1002 12:23:32.830289 2639977 exec_runner.go:144] found /home/jenkins/minikube-integration/17340-2494243/.minikube/key.pem, removing ...
	I1002 12:23:32.830298 2639977 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17340-2494243/.minikube/key.pem
	I1002 12:23:32.830324 2639977 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-2494243/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17340-2494243/.minikube/key.pem (1675 bytes)
	I1002 12:23:32.830373 2639977 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17340-2494243/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17340-2494243/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17340-2494243/.minikube/certs/ca-key.pem org=jenkins.pause-668509 san=[192.168.85.2 127.0.0.1 localhost 127.0.0.1 minikube pause-668509]
	I1002 12:23:33.230338 2639977 provision.go:172] copyRemoteCerts
	I1002 12:23:33.230411 2639977 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 12:23:33.230460 2639977 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-668509
	I1002 12:23:33.251210 2639977 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36083 SSHKeyPath:/home/jenkins/minikube-integration/17340-2494243/.minikube/machines/pause-668509/id_rsa Username:docker}
	I1002 12:23:33.357593 2639977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2494243/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 12:23:33.397868 2639977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2494243/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I1002 12:23:33.435255 2639977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2494243/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 12:23:33.473226 2639977 provision.go:86] duration metric: configureAuth took 662.649093ms
	I1002 12:23:33.473252 2639977 ubuntu.go:193] setting minikube options for container-runtime
	I1002 12:23:33.473481 2639977 config.go:182] Loaded profile config "pause-668509": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1002 12:23:33.473590 2639977 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-668509
	I1002 12:23:33.495137 2639977 main.go:141] libmachine: Using SSH client type: native
	I1002 12:23:33.495661 2639977 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ac3c0] 0x3aeb30 <nil>  [] 0s} 127.0.0.1 36083 <nil> <nil>}
	I1002 12:23:33.495682 2639977 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 12:23:39.010337 2639977 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 12:23:39.010363 2639977 machine.go:91] provisioned docker machine in 6.555011052s
	I1002 12:23:39.010374 2639977 start.go:300] post-start starting for "pause-668509" (driver="docker")
	I1002 12:23:39.010385 2639977 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 12:23:39.010452 2639977 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 12:23:39.010503 2639977 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-668509
	I1002 12:23:39.035391 2639977 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36083 SSHKeyPath:/home/jenkins/minikube-integration/17340-2494243/.minikube/machines/pause-668509/id_rsa Username:docker}
	I1002 12:23:39.140185 2639977 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 12:23:39.144688 2639977 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 12:23:39.144735 2639977 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1002 12:23:39.144748 2639977 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1002 12:23:39.144757 2639977 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1002 12:23:39.144771 2639977 filesync.go:126] Scanning /home/jenkins/minikube-integration/17340-2494243/.minikube/addons for local assets ...
	I1002 12:23:39.144834 2639977 filesync.go:126] Scanning /home/jenkins/minikube-integration/17340-2494243/.minikube/files for local assets ...
	I1002 12:23:39.144935 2639977 filesync.go:149] local asset: /home/jenkins/minikube-integration/17340-2494243/.minikube/files/etc/ssl/certs/24995982.pem -> 24995982.pem in /etc/ssl/certs
	I1002 12:23:39.145074 2639977 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 12:23:39.156323 2639977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2494243/.minikube/files/etc/ssl/certs/24995982.pem --> /etc/ssl/certs/24995982.pem (1708 bytes)
	I1002 12:23:39.186179 2639977 start.go:303] post-start completed in 175.789003ms
	I1002 12:23:39.186309 2639977 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 12:23:39.186365 2639977 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-668509
	I1002 12:23:39.204618 2639977 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36083 SSHKeyPath:/home/jenkins/minikube-integration/17340-2494243/.minikube/machines/pause-668509/id_rsa Username:docker}
	I1002 12:23:39.299099 2639977 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 12:23:39.305133 2639977 fix.go:56] fixHost completed within 6.879239839s
	I1002 12:23:39.305159 2639977 start.go:83] releasing machines lock for "pause-668509", held for 6.879296479s
	I1002 12:23:39.305230 2639977 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-668509
	I1002 12:23:39.322958 2639977 ssh_runner.go:195] Run: cat /version.json
	I1002 12:23:39.323032 2639977 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-668509
	I1002 12:23:39.322960 2639977 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 12:23:39.323147 2639977 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-668509
	I1002 12:23:39.346323 2639977 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36083 SSHKeyPath:/home/jenkins/minikube-integration/17340-2494243/.minikube/machines/pause-668509/id_rsa Username:docker}
	I1002 12:23:39.359856 2639977 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36083 SSHKeyPath:/home/jenkins/minikube-integration/17340-2494243/.minikube/machines/pause-668509/id_rsa Username:docker}
	I1002 12:23:39.441548 2639977 ssh_runner.go:195] Run: systemctl --version
	I1002 12:23:39.904546 2639977 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 12:23:40.103004 2639977 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1002 12:23:40.120589 2639977 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 12:23:40.144505 2639977 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1002 12:23:40.144676 2639977 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 12:23:40.171857 2639977 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1002 12:23:40.171883 2639977 start.go:469] detecting cgroup driver to use...
	I1002 12:23:40.171916 2639977 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1002 12:23:40.171984 2639977 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 12:23:40.199540 2639977 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 12:23:40.221367 2639977 docker.go:197] disabling cri-docker service (if available) ...
	I1002 12:23:40.221439 2639977 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 12:23:40.252086 2639977 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 12:23:40.276325 2639977 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 12:23:40.525514 2639977 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 12:23:40.701768 2639977 docker.go:213] disabling docker service ...
	I1002 12:23:40.701888 2639977 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 12:23:40.732331 2639977 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 12:23:40.766157 2639977 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 12:23:40.975682 2639977 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 12:23:41.184573 2639977 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 12:23:41.218834 2639977 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 12:23:41.279530 2639977 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1002 12:23:41.279598 2639977 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 12:23:41.311535 2639977 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 12:23:41.311607 2639977 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 12:23:41.347074 2639977 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 12:23:41.381854 2639977 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 12:23:41.413617 2639977 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 12:23:41.438844 2639977 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 12:23:41.463431 2639977 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 12:23:41.491059 2639977 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 12:23:41.776070 2639977 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 12:23:50.774018 2639977 ssh_runner.go:235] Completed: sudo systemctl restart crio: (8.997837274s)
	I1002 12:23:50.774046 2639977 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 12:23:50.774100 2639977 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 12:23:50.778940 2639977 start.go:537] Will wait 60s for crictl version
	I1002 12:23:50.779006 2639977 ssh_runner.go:195] Run: which crictl
	I1002 12:23:50.783491 2639977 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1002 12:23:50.826301 2639977 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1002 12:23:50.826409 2639977 ssh_runner.go:195] Run: crio --version
	I1002 12:23:50.875711 2639977 ssh_runner.go:195] Run: crio --version
	I1002 12:23:50.926254 2639977 out.go:177] * Preparing Kubernetes v1.28.2 on CRI-O 1.24.6 ...
	I1002 12:23:50.928370 2639977 cli_runner.go:164] Run: docker network inspect pause-668509 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 12:23:50.946230 2639977 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1002 12:23:50.951586 2639977 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1002 12:23:50.951653 2639977 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 12:23:51.010184 2639977 crio.go:496] all images are preloaded for cri-o runtime.
	I1002 12:23:51.010211 2639977 crio.go:415] Images already preloaded, skipping extraction
	I1002 12:23:51.010286 2639977 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 12:23:51.057820 2639977 crio.go:496] all images are preloaded for cri-o runtime.
	I1002 12:23:51.057844 2639977 cache_images.go:84] Images are preloaded, skipping loading
	I1002 12:23:51.057930 2639977 ssh_runner.go:195] Run: crio config
	I1002 12:23:51.134323 2639977 cni.go:84] Creating CNI manager for ""
	I1002 12:23:51.134354 2639977 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 12:23:51.134375 2639977 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1002 12:23:51.134396 2639977 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.28.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-668509 NodeName:pause-668509 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 12:23:51.134556 2639977 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-668509"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 12:23:51.134647 2639977 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=pause-668509 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.2 ClusterName:pause-668509 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1002 12:23:51.134723 2639977 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.2
	I1002 12:23:51.147206 2639977 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 12:23:51.147303 2639977 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 12:23:51.159147 2639977 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (422 bytes)
	I1002 12:23:51.182957 2639977 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 12:23:51.205939 2639977 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2093 bytes)
	I1002 12:23:51.228986 2639977 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1002 12:23:51.234096 2639977 certs.go:56] Setting up /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/pause-668509 for IP: 192.168.85.2
	I1002 12:23:51.234145 2639977 certs.go:190] acquiring lock for shared ca certs: {Name:mk2e28f0a4c3849593f708b97426b4e4332dc9e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 12:23:51.234300 2639977 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17340-2494243/.minikube/ca.key
	I1002 12:23:51.234363 2639977 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17340-2494243/.minikube/proxy-client-ca.key
	I1002 12:23:51.234455 2639977 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/pause-668509/client.key
	I1002 12:23:51.234521 2639977 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/pause-668509/apiserver.key.43b9df8c
	I1002 12:23:51.234574 2639977 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/pause-668509/proxy-client.key
	I1002 12:23:51.234697 2639977 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-2494243/.minikube/certs/home/jenkins/minikube-integration/17340-2494243/.minikube/certs/2499598.pem (1338 bytes)
	W1002 12:23:51.234734 2639977 certs.go:433] ignoring /home/jenkins/minikube-integration/17340-2494243/.minikube/certs/home/jenkins/minikube-integration/17340-2494243/.minikube/certs/2499598_empty.pem, impossibly tiny 0 bytes
	I1002 12:23:51.234746 2639977 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-2494243/.minikube/certs/home/jenkins/minikube-integration/17340-2494243/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 12:23:51.234782 2639977 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-2494243/.minikube/certs/home/jenkins/minikube-integration/17340-2494243/.minikube/certs/ca.pem (1082 bytes)
	I1002 12:23:51.234814 2639977 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-2494243/.minikube/certs/home/jenkins/minikube-integration/17340-2494243/.minikube/certs/cert.pem (1123 bytes)
	I1002 12:23:51.234845 2639977 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-2494243/.minikube/certs/home/jenkins/minikube-integration/17340-2494243/.minikube/certs/key.pem (1675 bytes)
	I1002 12:23:51.234897 2639977 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-2494243/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17340-2494243/.minikube/files/etc/ssl/certs/24995982.pem (1708 bytes)
	I1002 12:23:51.235621 2639977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/pause-668509/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1002 12:23:51.266409 2639977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/pause-668509/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 12:23:51.296669 2639977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/pause-668509/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 12:23:51.326293 2639977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/pause-668509/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1002 12:23:51.356979 2639977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2494243/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 12:23:51.387304 2639977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2494243/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1002 12:23:51.417020 2639977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2494243/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 12:23:51.447152 2639977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2494243/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1002 12:23:51.476983 2639977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2494243/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 12:23:51.506612 2639977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2494243/.minikube/certs/2499598.pem --> /usr/share/ca-certificates/2499598.pem (1338 bytes)
	I1002 12:23:51.537204 2639977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2494243/.minikube/files/etc/ssl/certs/24995982.pem --> /usr/share/ca-certificates/24995982.pem (1708 bytes)
	I1002 12:23:51.568165 2639977 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 12:23:51.591366 2639977 ssh_runner.go:195] Run: openssl version
	I1002 12:23:51.599159 2639977 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 12:23:51.611989 2639977 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 12:23:51.617200 2639977 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct  2 11:39 /usr/share/ca-certificates/minikubeCA.pem
	I1002 12:23:51.617304 2639977 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 12:23:51.627433 2639977 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 12:23:51.639179 2639977 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2499598.pem && ln -fs /usr/share/ca-certificates/2499598.pem /etc/ssl/certs/2499598.pem"
	I1002 12:23:51.651858 2639977 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2499598.pem
	I1002 12:23:51.657111 2639977 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct  2 11:46 /usr/share/ca-certificates/2499598.pem
	I1002 12:23:51.657181 2639977 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2499598.pem
	I1002 12:23:51.666751 2639977 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2499598.pem /etc/ssl/certs/51391683.0"
	I1002 12:23:51.678931 2639977 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/24995982.pem && ln -fs /usr/share/ca-certificates/24995982.pem /etc/ssl/certs/24995982.pem"
	I1002 12:23:51.691698 2639977 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/24995982.pem
	I1002 12:23:51.697169 2639977 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct  2 11:46 /usr/share/ca-certificates/24995982.pem
	I1002 12:23:51.697261 2639977 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/24995982.pem
	I1002 12:23:51.706360 2639977 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/24995982.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 12:23:51.718258 2639977 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1002 12:23:51.723423 2639977 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 12:23:51.732718 2639977 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 12:23:51.741811 2639977 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 12:23:51.750912 2639977 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 12:23:51.759989 2639977 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 12:23:51.769196 2639977 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1002 12:23:51.778365 2639977 kubeadm.go:404] StartCluster: {Name:pause-668509 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:pause-668509 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServ
erIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-p
rovisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 12:23:51.778490 2639977 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 12:23:51.778563 2639977 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 12:23:51.822968 2639977 cri.go:89] found id: "1d65fa43be6526a0e8de55d55c972bf54b762e630bff8bedaf4344333e76d262"
	I1002 12:23:51.822990 2639977 cri.go:89] found id: "765f6029e04503733090301109f6ac9d0680f171a384f96f69f1869087493160"
	I1002 12:23:51.822997 2639977 cri.go:89] found id: "91e1d9f2ab2dac9b1e8357f75a947028e40967ef9a5c6bb6e3ad20e171893d28"
	I1002 12:23:51.823002 2639977 cri.go:89] found id: "3f197576727f7b9ec7929a1095e70a78403ef2146c165ffdab079f0e2dede4ee"
	I1002 12:23:51.823006 2639977 cri.go:89] found id: "edc29e5627cfe3eaa7d550461b0649db0a97205623edd25c9b3483d9aa1e5d53"
	I1002 12:23:51.823010 2639977 cri.go:89] found id: "882bfdefd067301b2b80b674d4032a06b2972666a29999d24088c9dd4625335c"
	I1002 12:23:51.823015 2639977 cri.go:89] found id: "88efc396e9b05e5dc78b0839250cb73e559f9650b0c57fcb5d74e358a54fbcb8"
	I1002 12:23:51.823019 2639977 cri.go:89] found id: "014046d912dfad6da106e30926717a8ed76ad72bbdf909d598d7147d8acf8c0f"
	I1002 12:23:51.823023 2639977 cri.go:89] found id: "b33d761035748a0a3eec9230dbb5e3e8620b6b53104f8fbacf34b58e861dbd33"
	I1002 12:23:51.823031 2639977 cri.go:89] found id: "b75a9c72a0a9399ee5479a16b4e1dddef04175cba6814d862319c964ff038b22"
	I1002 12:23:51.823036 2639977 cri.go:89] found id: "d4334935979b6cd4813af58cde21e855aa6c9ce043bcfdd7d9f17311e49aad4d"
	I1002 12:23:51.823045 2639977 cri.go:89] found id: "6322ef12b5a5599464089b8ecd8b3448ccced205d43a22100aa3af0cb08d14e3"
	I1002 12:23:51.823050 2639977 cri.go:89] found id: "4c6d76d4f88efe79aacaf3c5a2fd4b03815c33f7e45703ab532ad1644e4cc1c7"
	I1002 12:23:51.823061 2639977 cri.go:89] found id: "91de1cce327dd75b6230291957ffc035b5386862fc03bd5173a07d32a578c04e"
	I1002 12:23:51.823065 2639977 cri.go:89] found id: "72f51e2ba1ec80099b42b049d710229c7f389aac19be780df56f100493a02618"
	I1002 12:23:51.823073 2639977 cri.go:89] found id: "f2a2f01e6b3c2a7757ee29334ecd929f156f633596a50d91eefb73ae8b541fae"
	I1002 12:23:51.823079 2639977 cri.go:89] found id: ""
	I1002 12:23:51.823137 2639977 ssh_runner.go:195] Run: sudo runc list -f json

** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect pause-668509
helpers_test.go:235: (dbg) docker inspect pause-668509:

-- stdout --
	[
	    {
	        "Id": "7ccd2f593640ba482201afff59752f23da31cecb69e8a23a8a81636dab99d102",
	        "Created": "2023-10-02T12:22:18.975467541Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2636109,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-10-02T12:22:19.328175055Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:560a33002deec07a703a16e2b1dbf6aecde4c0d46aaefa1cb6df4c8c8a7774a7",
	        "ResolvConfPath": "/var/lib/docker/containers/7ccd2f593640ba482201afff59752f23da31cecb69e8a23a8a81636dab99d102/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7ccd2f593640ba482201afff59752f23da31cecb69e8a23a8a81636dab99d102/hostname",
	        "HostsPath": "/var/lib/docker/containers/7ccd2f593640ba482201afff59752f23da31cecb69e8a23a8a81636dab99d102/hosts",
	        "LogPath": "/var/lib/docker/containers/7ccd2f593640ba482201afff59752f23da31cecb69e8a23a8a81636dab99d102/7ccd2f593640ba482201afff59752f23da31cecb69e8a23a8a81636dab99d102-json.log",
	        "Name": "/pause-668509",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-668509:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-668509",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2147483648,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/7bb663e3ecacbc5ac6bf695da439a070ff82d338da730b70d281c9f7b027d2bc-init/diff:/var/lib/docker/overlay2/1ffc828a09df1e9fa25f5092ba7b162a0fa5a6fe031a41b1f614792625eb1522/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7bb663e3ecacbc5ac6bf695da439a070ff82d338da730b70d281c9f7b027d2bc/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7bb663e3ecacbc5ac6bf695da439a070ff82d338da730b70d281c9f7b027d2bc/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7bb663e3ecacbc5ac6bf695da439a070ff82d338da730b70d281c9f7b027d2bc/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-668509",
	                "Source": "/var/lib/docker/volumes/pause-668509/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-668509",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-668509",
	                "name.minikube.sigs.k8s.io": "pause-668509",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0d5309f72edcb189f7ec52dcf9140ab2acf843fbc7e3aa7bfed679ec5a479188",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36083"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36082"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36079"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36081"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36080"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/0d5309f72edc",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-668509": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "7ccd2f593640",
	                        "pause-668509"
	                    ],
	                    "NetworkID": "da1bf30239851d2e5aa2c8a99aa82d9a50d0f5350bb586b9bcc16cdfd0cead4b",
	                    "EndpointID": "a6c0451a6e5e992cc88f9a78bdfb26d7f2d4d758d9c812e2b2f8899752cb3b77",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:55:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-668509 -n pause-668509
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p pause-668509 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p pause-668509 logs -n 25: (1.940266441s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-409989 sudo crio            | cilium-409989             | jenkins | v1.31.2 | 02 Oct 23 12:15 UTC |                     |
	|         | config                                |                           |         |         |                     |                     |
	| delete  | -p cilium-409989                      | cilium-409989             | jenkins | v1.31.2 | 02 Oct 23 12:15 UTC | 02 Oct 23 12:15 UTC |
	| start   | -p force-systemd-env-193623           | force-systemd-env-193623  | jenkins | v1.31.2 | 02 Oct 23 12:15 UTC | 02 Oct 23 12:16 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=5 --driver=docker                  |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p missing-upgrade-402693             | missing-upgrade-402693    | jenkins | v1.31.2 | 02 Oct 23 12:15 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker                  |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-193623           | force-systemd-env-193623  | jenkins | v1.31.2 | 02 Oct 23 12:16 UTC | 02 Oct 23 12:16 UTC |
	| start   | -p force-systemd-flag-990972          | force-systemd-flag-990972 | jenkins | v1.31.2 | 02 Oct 23 12:16 UTC | 02 Oct 23 12:17 UTC |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=5 --driver=docker                  |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p missing-upgrade-402693             | missing-upgrade-402693    | jenkins | v1.31.2 | 02 Oct 23 12:16 UTC | 02 Oct 23 12:16 UTC |
	| start   | -p cert-expiration-752167             | cert-expiration-752167    | jenkins | v1.31.2 | 02 Oct 23 12:16 UTC | 02 Oct 23 12:17 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=docker                       |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-990972 ssh cat     | force-systemd-flag-990972 | jenkins | v1.31.2 | 02 Oct 23 12:17 UTC | 02 Oct 23 12:17 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf    |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-990972          | force-systemd-flag-990972 | jenkins | v1.31.2 | 02 Oct 23 12:17 UTC | 02 Oct 23 12:17 UTC |
	| start   | -p cert-options-926506                | cert-options-926506       | jenkins | v1.31.2 | 02 Oct 23 12:17 UTC | 02 Oct 23 12:17 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=docker                       |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | cert-options-926506 ssh               | cert-options-926506       | jenkins | v1.31.2 | 02 Oct 23 12:17 UTC | 02 Oct 23 12:17 UTC |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-926506 -- sudo        | cert-options-926506       | jenkins | v1.31.2 | 02 Oct 23 12:17 UTC | 02 Oct 23 12:17 UTC |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-926506                | cert-options-926506       | jenkins | v1.31.2 | 02 Oct 23 12:17 UTC | 02 Oct 23 12:17 UTC |
	| start   | -p running-upgrade-763919             | running-upgrade-763919    | jenkins | v1.31.2 | 02 Oct 23 12:18 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker                  |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-763919             | running-upgrade-763919    | jenkins | v1.31.2 | 02 Oct 23 12:19 UTC | 02 Oct 23 12:19 UTC |
	| start   | -p kubernetes-upgrade-832241          | kubernetes-upgrade-832241 | jenkins | v1.31.2 | 02 Oct 23 12:19 UTC | 02 Oct 23 12:20 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker                  |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-832241          | kubernetes-upgrade-832241 | jenkins | v1.31.2 | 02 Oct 23 12:20 UTC | 02 Oct 23 12:20 UTC |
	| start   | -p kubernetes-upgrade-832241          | kubernetes-upgrade-832241 | jenkins | v1.31.2 | 02 Oct 23 12:20 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker                  |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p cert-expiration-752167             | cert-expiration-752167    | jenkins | v1.31.2 | 02 Oct 23 12:20 UTC | 02 Oct 23 12:20 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=docker                       |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-752167             | cert-expiration-752167    | jenkins | v1.31.2 | 02 Oct 23 12:20 UTC | 02 Oct 23 12:20 UTC |
	| start   | -p stopped-upgrade-998345             | stopped-upgrade-998345    | jenkins | v1.31.2 | 02 Oct 23 12:22 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker                  |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-998345             | stopped-upgrade-998345    | jenkins | v1.31.2 | 02 Oct 23 12:22 UTC | 02 Oct 23 12:22 UTC |
	| start   | -p pause-668509 --memory=2048         | pause-668509              | jenkins | v1.31.2 | 02 Oct 23 12:22 UTC | 02 Oct 23 12:23 UTC |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=all --driver=docker            |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p pause-668509                       | pause-668509              | jenkins | v1.31.2 | 02 Oct 23 12:23 UTC | 02 Oct 23 12:24 UTC |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker                  |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/02 12:23:32
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.21.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 12:23:32.180898 2639977 out.go:296] Setting OutFile to fd 1 ...
	I1002 12:23:32.181099 2639977 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 12:23:32.181110 2639977 out.go:309] Setting ErrFile to fd 2...
	I1002 12:23:32.181116 2639977 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 12:23:32.181420 2639977 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17340-2494243/.minikube/bin
	I1002 12:23:32.181809 2639977 out.go:303] Setting JSON to false
	I1002 12:23:32.182933 2639977 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":72358,"bootTime":1696177054,"procs":260,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1045-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1002 12:23:32.183010 2639977 start.go:138] virtualization:  
	I1002 12:23:32.185610 2639977 out.go:177] * [pause-668509] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I1002 12:23:32.187806 2639977 out.go:177]   - MINIKUBE_LOCATION=17340
	I1002 12:23:32.187981 2639977 notify.go:220] Checking for updates...
	I1002 12:23:32.191537 2639977 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 12:23:32.193419 2639977 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17340-2494243/kubeconfig
	I1002 12:23:32.195402 2639977 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17340-2494243/.minikube
	I1002 12:23:32.197413 2639977 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 12:23:32.199344 2639977 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 12:23:32.201692 2639977 config.go:182] Loaded profile config "pause-668509": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1002 12:23:32.202419 2639977 driver.go:373] Setting default libvirt URI to qemu:///system
	I1002 12:23:32.228165 2639977 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I1002 12:23:32.228272 2639977 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 12:23:32.322273 2639977 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:49 OomKillDisable:true NGoroutines:65 SystemTime:2023-10-02 12:23:32.309998554 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1045-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> S
erverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1002 12:23:32.322378 2639977 docker.go:294] overlay module found
	I1002 12:23:32.324585 2639977 out.go:177] * Using the docker driver based on existing profile
	I1002 12:23:32.326260 2639977 start.go:298] selected driver: docker
	I1002 12:23:32.326279 2639977 start.go:902] validating driver "docker" against &{Name:pause-668509 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:pause-668509 Namespace:default APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-c
reds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 12:23:32.326457 2639977 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 12:23:32.326572 2639977 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 12:23:32.396630 2639977 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:49 OomKillDisable:true NGoroutines:65 SystemTime:2023-10-02 12:23:32.386624913 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1045-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> S
erverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1002 12:23:32.397102 2639977 cni.go:84] Creating CNI manager for ""
	I1002 12:23:32.397121 2639977 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 12:23:32.397135 2639977 start_flags.go:321] config:
	{Name:pause-668509 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:pause-668509 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISo
cket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-p
rovisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 12:23:32.400667 2639977 out.go:177] * Starting control plane node pause-668509 in cluster pause-668509
	I1002 12:23:32.402595 2639977 cache.go:122] Beginning downloading kic base image for docker with crio
	I1002 12:23:32.404192 2639977 out.go:177] * Pulling base image ...
	I1002 12:23:32.405948 2639977 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1002 12:23:32.406008 2639977 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17340-2494243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-arm64.tar.lz4
	I1002 12:23:32.406021 2639977 cache.go:57] Caching tarball of preloaded images
	I1002 12:23:32.406062 2639977 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 in local docker daemon
	I1002 12:23:32.406119 2639977 preload.go:174] Found /home/jenkins/minikube-integration/17340-2494243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1002 12:23:32.406130 2639977 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on crio
	I1002 12:23:32.406255 2639977 profile.go:148] Saving config to /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/pause-668509/config.json ...
	I1002 12:23:32.425684 2639977 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 in local docker daemon, skipping pull
	I1002 12:23:32.425711 2639977 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 exists in daemon, skipping load
	I1002 12:23:32.425732 2639977 cache.go:195] Successfully downloaded all kic artifacts
	I1002 12:23:32.425774 2639977 start.go:365] acquiring machines lock for pause-668509: {Name:mka7b1d7db88c46f55df5c1454a55c5ef9dda60d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 12:23:32.425851 2639977 start.go:369] acquired machines lock for "pause-668509" in 53.022µs
	I1002 12:23:32.425875 2639977 start.go:96] Skipping create...Using existing machine configuration
	I1002 12:23:32.425884 2639977 fix.go:54] fixHost starting: 
	I1002 12:23:32.426171 2639977 cli_runner.go:164] Run: docker container inspect pause-668509 --format={{.State.Status}}
	I1002 12:23:32.450877 2639977 fix.go:102] recreateIfNeeded on pause-668509: state=Running err=<nil>
	W1002 12:23:32.450923 2639977 fix.go:128] unexpected machine state, will restart: <nil>
	I1002 12:23:32.453218 2639977 out.go:177] * Updating the running docker "pause-668509" container ...
	I1002 12:23:32.455334 2639977 machine.go:88] provisioning docker machine ...
	I1002 12:23:32.455363 2639977 ubuntu.go:169] provisioning hostname "pause-668509"
	I1002 12:23:32.455436 2639977 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-668509
	I1002 12:23:32.478449 2639977 main.go:141] libmachine: Using SSH client type: native
	I1002 12:23:32.478889 2639977 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ac3c0] 0x3aeb30 <nil>  [] 0s} 127.0.0.1 36083 <nil> <nil>}
	I1002 12:23:32.478953 2639977 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-668509 && echo "pause-668509" | sudo tee /etc/hostname
	I1002 12:23:32.641530 2639977 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-668509
	
	I1002 12:23:32.641616 2639977 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-668509
	I1002 12:23:32.665677 2639977 main.go:141] libmachine: Using SSH client type: native
	I1002 12:23:32.666204 2639977 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ac3c0] 0x3aeb30 <nil>  [] 0s} 127.0.0.1 36083 <nil> <nil>}
	I1002 12:23:32.666229 2639977 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-668509' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-668509/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-668509' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 12:23:32.810431 2639977 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 12:23:32.810504 2639977 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17340-2494243/.minikube CaCertPath:/home/jenkins/minikube-integration/17340-2494243/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17340-2494243/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17340-2494243/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17340-2494243/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17340-2494243/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17340-2494243/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17340-2494243/.minikube}
	I1002 12:23:32.810537 2639977 ubuntu.go:177] setting up certificates
	I1002 12:23:32.810548 2639977 provision.go:83] configureAuth start
	I1002 12:23:32.810611 2639977 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-668509
	I1002 12:23:32.829905 2639977 provision.go:138] copyHostCerts
	I1002 12:23:32.829973 2639977 exec_runner.go:144] found /home/jenkins/minikube-integration/17340-2494243/.minikube/ca.pem, removing ...
	I1002 12:23:32.829999 2639977 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17340-2494243/.minikube/ca.pem
	I1002 12:23:32.830080 2639977 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-2494243/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17340-2494243/.minikube/ca.pem (1082 bytes)
	I1002 12:23:32.830186 2639977 exec_runner.go:144] found /home/jenkins/minikube-integration/17340-2494243/.minikube/cert.pem, removing ...
	I1002 12:23:32.830196 2639977 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17340-2494243/.minikube/cert.pem
	I1002 12:23:32.830230 2639977 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-2494243/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17340-2494243/.minikube/cert.pem (1123 bytes)
	I1002 12:23:32.830289 2639977 exec_runner.go:144] found /home/jenkins/minikube-integration/17340-2494243/.minikube/key.pem, removing ...
	I1002 12:23:32.830298 2639977 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17340-2494243/.minikube/key.pem
	I1002 12:23:32.830324 2639977 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-2494243/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17340-2494243/.minikube/key.pem (1675 bytes)
	I1002 12:23:32.830373 2639977 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17340-2494243/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17340-2494243/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17340-2494243/.minikube/certs/ca-key.pem org=jenkins.pause-668509 san=[192.168.85.2 127.0.0.1 localhost 127.0.0.1 minikube pause-668509]
	I1002 12:23:33.230338 2639977 provision.go:172] copyRemoteCerts
	I1002 12:23:33.230411 2639977 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 12:23:33.230460 2639977 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-668509
	I1002 12:23:33.251210 2639977 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36083 SSHKeyPath:/home/jenkins/minikube-integration/17340-2494243/.minikube/machines/pause-668509/id_rsa Username:docker}
	I1002 12:23:33.357593 2639977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2494243/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 12:23:33.397868 2639977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2494243/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I1002 12:23:33.435255 2639977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2494243/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 12:23:33.473226 2639977 provision.go:86] duration metric: configureAuth took 662.649093ms
	I1002 12:23:33.473252 2639977 ubuntu.go:193] setting minikube options for container-runtime
	I1002 12:23:33.473481 2639977 config.go:182] Loaded profile config "pause-668509": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1002 12:23:33.473590 2639977 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-668509
	I1002 12:23:33.495137 2639977 main.go:141] libmachine: Using SSH client type: native
	I1002 12:23:33.495661 2639977 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ac3c0] 0x3aeb30 <nil>  [] 0s} 127.0.0.1 36083 <nil> <nil>}
	I1002 12:23:33.495682 2639977 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 12:23:32.963956 2626896 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.086955347s)
	W1002 12:23:32.963994 2626896 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I1002 12:23:32.964001 2626896 logs.go:123] Gathering logs for kube-apiserver [e7547ea18621a8e51949f5d23db77e7dda7b034958222dd8693dc7a53d24ac6b] ...
	I1002 12:23:32.964011 2626896 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7547ea18621a8e51949f5d23db77e7dda7b034958222dd8693dc7a53d24ac6b"
	I1002 12:23:33.072215 2626896 logs.go:123] Gathering logs for kube-scheduler [190a214fbd04dbd14bb0bf18b3701f5316e2c1e886ecc040fda2694810fc1575] ...
	I1002 12:23:33.072297 2626896 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 190a214fbd04dbd14bb0bf18b3701f5316e2c1e886ecc040fda2694810fc1575"
	I1002 12:23:33.198306 2626896 logs.go:123] Gathering logs for CRI-O ...
	I1002 12:23:33.198383 2626896 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 12:23:33.259232 2626896 logs.go:123] Gathering logs for kube-apiserver [b29d9a1de92bd8df7c6dad49de7fc6afa0264014c0a9b14ad20e14dd6528e6d3] ...
	I1002 12:23:33.259309 2626896 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b29d9a1de92bd8df7c6dad49de7fc6afa0264014c0a9b14ad20e14dd6528e6d3"
	I1002 12:23:33.316871 2626896 logs.go:123] Gathering logs for kube-controller-manager [f1d47bfdbf0492d44243fbb956f011a3a3cb89629cb38698a0517972d7b2bcf9] ...
	I1002 12:23:33.316951 2626896 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f1d47bfdbf0492d44243fbb956f011a3a3cb89629cb38698a0517972d7b2bcf9"
	I1002 12:23:33.382907 2626896 logs.go:123] Gathering logs for container status ...
	I1002 12:23:33.382940 2626896 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 12:23:35.961556 2626896 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1002 12:23:39.010337 2639977 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 12:23:39.010363 2639977 machine.go:91] provisioned docker machine in 6.555011052s
	I1002 12:23:39.010374 2639977 start.go:300] post-start starting for "pause-668509" (driver="docker")
	I1002 12:23:39.010385 2639977 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 12:23:39.010452 2639977 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 12:23:39.010503 2639977 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-668509
	I1002 12:23:39.035391 2639977 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36083 SSHKeyPath:/home/jenkins/minikube-integration/17340-2494243/.minikube/machines/pause-668509/id_rsa Username:docker}
	I1002 12:23:39.140185 2639977 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 12:23:39.144688 2639977 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 12:23:39.144735 2639977 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1002 12:23:39.144748 2639977 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1002 12:23:39.144757 2639977 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1002 12:23:39.144771 2639977 filesync.go:126] Scanning /home/jenkins/minikube-integration/17340-2494243/.minikube/addons for local assets ...
	I1002 12:23:39.144834 2639977 filesync.go:126] Scanning /home/jenkins/minikube-integration/17340-2494243/.minikube/files for local assets ...
	I1002 12:23:39.144935 2639977 filesync.go:149] local asset: /home/jenkins/minikube-integration/17340-2494243/.minikube/files/etc/ssl/certs/24995982.pem -> 24995982.pem in /etc/ssl/certs
	I1002 12:23:39.145074 2639977 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 12:23:39.156323 2639977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2494243/.minikube/files/etc/ssl/certs/24995982.pem --> /etc/ssl/certs/24995982.pem (1708 bytes)
	I1002 12:23:39.186179 2639977 start.go:303] post-start completed in 175.789003ms
	I1002 12:23:39.186309 2639977 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 12:23:39.186365 2639977 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-668509
	I1002 12:23:39.204618 2639977 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36083 SSHKeyPath:/home/jenkins/minikube-integration/17340-2494243/.minikube/machines/pause-668509/id_rsa Username:docker}
	I1002 12:23:39.299099 2639977 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 12:23:39.305133 2639977 fix.go:56] fixHost completed within 6.879239839s
	I1002 12:23:39.305159 2639977 start.go:83] releasing machines lock for "pause-668509", held for 6.879296479s
	I1002 12:23:39.305230 2639977 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-668509
	I1002 12:23:39.322958 2639977 ssh_runner.go:195] Run: cat /version.json
	I1002 12:23:39.323032 2639977 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-668509
	I1002 12:23:39.322960 2639977 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 12:23:39.323147 2639977 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-668509
	I1002 12:23:39.346323 2639977 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36083 SSHKeyPath:/home/jenkins/minikube-integration/17340-2494243/.minikube/machines/pause-668509/id_rsa Username:docker}
	I1002 12:23:39.359856 2639977 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36083 SSHKeyPath:/home/jenkins/minikube-integration/17340-2494243/.minikube/machines/pause-668509/id_rsa Username:docker}
	I1002 12:23:39.441548 2639977 ssh_runner.go:195] Run: systemctl --version
	I1002 12:23:39.904546 2639977 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 12:23:40.103004 2639977 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1002 12:23:40.120589 2639977 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 12:23:40.144505 2639977 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1002 12:23:40.144676 2639977 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 12:23:40.171857 2639977 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1002 12:23:40.171883 2639977 start.go:469] detecting cgroup driver to use...
	I1002 12:23:40.171916 2639977 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1002 12:23:40.171984 2639977 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 12:23:40.199540 2639977 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 12:23:40.221367 2639977 docker.go:197] disabling cri-docker service (if available) ...
	I1002 12:23:40.221439 2639977 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 12:23:40.252086 2639977 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 12:23:40.276325 2639977 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 12:23:40.525514 2639977 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 12:23:40.701768 2639977 docker.go:213] disabling docker service ...
	I1002 12:23:40.701888 2639977 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 12:23:40.732331 2639977 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 12:23:40.766157 2639977 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 12:23:40.975682 2639977 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 12:23:41.184573 2639977 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 12:23:41.218834 2639977 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 12:23:41.279530 2639977 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1002 12:23:41.279598 2639977 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 12:23:41.311535 2639977 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 12:23:41.311607 2639977 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 12:23:41.347074 2639977 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 12:23:41.381854 2639977 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 12:23:41.413617 2639977 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 12:23:41.438844 2639977 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 12:23:41.463431 2639977 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 12:23:41.491059 2639977 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 12:23:41.776070 2639977 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 12:23:37.718308 2626896 api_server.go:269] stopped: https://192.168.67.2:8443/healthz: Get "https://192.168.67.2:8443/healthz": read tcp 192.168.67.1:56632->192.168.67.2:8443: read: connection reset by peer
	I1002 12:23:37.718365 2626896 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 12:23:37.718428 2626896 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 12:23:37.775282 2626896 cri.go:89] found id: "e7547ea18621a8e51949f5d23db77e7dda7b034958222dd8693dc7a53d24ac6b"
	I1002 12:23:37.775302 2626896 cri.go:89] found id: "b29d9a1de92bd8df7c6dad49de7fc6afa0264014c0a9b14ad20e14dd6528e6d3"
	I1002 12:23:37.775308 2626896 cri.go:89] found id: ""
	I1002 12:23:37.775318 2626896 logs.go:284] 2 containers: [e7547ea18621a8e51949f5d23db77e7dda7b034958222dd8693dc7a53d24ac6b b29d9a1de92bd8df7c6dad49de7fc6afa0264014c0a9b14ad20e14dd6528e6d3]
	I1002 12:23:37.775375 2626896 ssh_runner.go:195] Run: which crictl
	I1002 12:23:37.779859 2626896 ssh_runner.go:195] Run: which crictl
	I1002 12:23:37.784378 2626896 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 12:23:37.784445 2626896 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 12:23:37.825670 2626896 cri.go:89] found id: ""
	I1002 12:23:37.825692 2626896 logs.go:284] 0 containers: []
	W1002 12:23:37.825701 2626896 logs.go:286] No container was found matching "etcd"
	I1002 12:23:37.825707 2626896 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 12:23:37.825767 2626896 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 12:23:37.867919 2626896 cri.go:89] found id: ""
	I1002 12:23:37.867946 2626896 logs.go:284] 0 containers: []
	W1002 12:23:37.867955 2626896 logs.go:286] No container was found matching "coredns"
	I1002 12:23:37.867962 2626896 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 12:23:37.868019 2626896 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 12:23:37.910129 2626896 cri.go:89] found id: "190a214fbd04dbd14bb0bf18b3701f5316e2c1e886ecc040fda2694810fc1575"
	I1002 12:23:37.910150 2626896 cri.go:89] found id: ""
	I1002 12:23:37.910158 2626896 logs.go:284] 1 containers: [190a214fbd04dbd14bb0bf18b3701f5316e2c1e886ecc040fda2694810fc1575]
	I1002 12:23:37.910215 2626896 ssh_runner.go:195] Run: which crictl
	I1002 12:23:37.914596 2626896 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 12:23:37.914669 2626896 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 12:23:37.956204 2626896 cri.go:89] found id: ""
	I1002 12:23:37.956225 2626896 logs.go:284] 0 containers: []
	W1002 12:23:37.956233 2626896 logs.go:286] No container was found matching "kube-proxy"
	I1002 12:23:37.956240 2626896 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 12:23:37.956298 2626896 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 12:23:37.999136 2626896 cri.go:89] found id: "e566561f7200b197c4f03c4a322fb4d8f5f0028b5e30e1a441f73d48e12301fd"
	I1002 12:23:37.999156 2626896 cri.go:89] found id: "f1d47bfdbf0492d44243fbb956f011a3a3cb89629cb38698a0517972d7b2bcf9"
	I1002 12:23:37.999162 2626896 cri.go:89] found id: ""
	I1002 12:23:37.999169 2626896 logs.go:284] 2 containers: [e566561f7200b197c4f03c4a322fb4d8f5f0028b5e30e1a441f73d48e12301fd f1d47bfdbf0492d44243fbb956f011a3a3cb89629cb38698a0517972d7b2bcf9]
	I1002 12:23:37.999228 2626896 ssh_runner.go:195] Run: which crictl
	I1002 12:23:38.007613 2626896 ssh_runner.go:195] Run: which crictl
	I1002 12:23:38.013024 2626896 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 12:23:38.013112 2626896 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 12:23:38.065684 2626896 cri.go:89] found id: ""
	I1002 12:23:38.065706 2626896 logs.go:284] 0 containers: []
	W1002 12:23:38.065720 2626896 logs.go:286] No container was found matching "kindnet"
	I1002 12:23:38.065727 2626896 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1002 12:23:38.065790 2626896 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 12:23:38.108788 2626896 cri.go:89] found id: ""
	I1002 12:23:38.108809 2626896 logs.go:284] 0 containers: []
	W1002 12:23:38.108817 2626896 logs.go:286] No container was found matching "storage-provisioner"
	I1002 12:23:38.108830 2626896 logs.go:123] Gathering logs for kube-apiserver [b29d9a1de92bd8df7c6dad49de7fc6afa0264014c0a9b14ad20e14dd6528e6d3] ...
	I1002 12:23:38.108843 2626896 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b29d9a1de92bd8df7c6dad49de7fc6afa0264014c0a9b14ad20e14dd6528e6d3"
	I1002 12:23:38.164272 2626896 logs.go:123] Gathering logs for kube-controller-manager [e566561f7200b197c4f03c4a322fb4d8f5f0028b5e30e1a441f73d48e12301fd] ...
	I1002 12:23:38.164300 2626896 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e566561f7200b197c4f03c4a322fb4d8f5f0028b5e30e1a441f73d48e12301fd"
	I1002 12:23:38.209116 2626896 logs.go:123] Gathering logs for CRI-O ...
	I1002 12:23:38.209146 2626896 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 12:23:38.277431 2626896 logs.go:123] Gathering logs for describe nodes ...
	I1002 12:23:38.277465 2626896 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 12:23:38.354210 2626896 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 12:23:38.354233 2626896 logs.go:123] Gathering logs for dmesg ...
	I1002 12:23:38.354245 2626896 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 12:23:38.378250 2626896 logs.go:123] Gathering logs for kube-apiserver [e7547ea18621a8e51949f5d23db77e7dda7b034958222dd8693dc7a53d24ac6b] ...
	I1002 12:23:38.378285 2626896 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7547ea18621a8e51949f5d23db77e7dda7b034958222dd8693dc7a53d24ac6b"
	I1002 12:23:38.426841 2626896 logs.go:123] Gathering logs for kube-scheduler [190a214fbd04dbd14bb0bf18b3701f5316e2c1e886ecc040fda2694810fc1575] ...
	I1002 12:23:38.426920 2626896 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 190a214fbd04dbd14bb0bf18b3701f5316e2c1e886ecc040fda2694810fc1575"
	I1002 12:23:38.530967 2626896 logs.go:123] Gathering logs for kube-controller-manager [f1d47bfdbf0492d44243fbb956f011a3a3cb89629cb38698a0517972d7b2bcf9] ...
	I1002 12:23:38.531007 2626896 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f1d47bfdbf0492d44243fbb956f011a3a3cb89629cb38698a0517972d7b2bcf9"
	I1002 12:23:38.575545 2626896 logs.go:123] Gathering logs for container status ...
	I1002 12:23:38.575575 2626896 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 12:23:38.624026 2626896 logs.go:123] Gathering logs for kubelet ...
	I1002 12:23:38.624054 2626896 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 12:23:41.248476 2626896 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1002 12:23:41.248879 2626896 api_server.go:269] stopped: https://192.168.67.2:8443/healthz: Get "https://192.168.67.2:8443/healthz": dial tcp 192.168.67.2:8443: connect: connection refused
	I1002 12:23:41.248937 2626896 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 12:23:41.248995 2626896 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 12:23:41.325811 2626896 cri.go:89] found id: "e7547ea18621a8e51949f5d23db77e7dda7b034958222dd8693dc7a53d24ac6b"
	I1002 12:23:41.325830 2626896 cri.go:89] found id: ""
	I1002 12:23:41.325837 2626896 logs.go:284] 1 containers: [e7547ea18621a8e51949f5d23db77e7dda7b034958222dd8693dc7a53d24ac6b]
	I1002 12:23:41.325894 2626896 ssh_runner.go:195] Run: which crictl
	I1002 12:23:41.330389 2626896 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 12:23:41.330457 2626896 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 12:23:41.410662 2626896 cri.go:89] found id: ""
	I1002 12:23:41.410683 2626896 logs.go:284] 0 containers: []
	W1002 12:23:41.410692 2626896 logs.go:286] No container was found matching "etcd"
	I1002 12:23:41.410698 2626896 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 12:23:41.410759 2626896 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 12:23:41.487289 2626896 cri.go:89] found id: ""
	I1002 12:23:41.487309 2626896 logs.go:284] 0 containers: []
	W1002 12:23:41.487318 2626896 logs.go:286] No container was found matching "coredns"
	I1002 12:23:41.487324 2626896 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 12:23:41.487386 2626896 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 12:23:41.563843 2626896 cri.go:89] found id: "190a214fbd04dbd14bb0bf18b3701f5316e2c1e886ecc040fda2694810fc1575"
	I1002 12:23:41.563864 2626896 cri.go:89] found id: ""
	I1002 12:23:41.563872 2626896 logs.go:284] 1 containers: [190a214fbd04dbd14bb0bf18b3701f5316e2c1e886ecc040fda2694810fc1575]
	I1002 12:23:41.563928 2626896 ssh_runner.go:195] Run: which crictl
	I1002 12:23:41.569448 2626896 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 12:23:41.569518 2626896 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 12:23:41.672472 2626896 cri.go:89] found id: ""
	I1002 12:23:41.672494 2626896 logs.go:284] 0 containers: []
	W1002 12:23:41.672504 2626896 logs.go:286] No container was found matching "kube-proxy"
	I1002 12:23:41.672511 2626896 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 12:23:41.672586 2626896 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 12:23:41.732165 2626896 cri.go:89] found id: "e566561f7200b197c4f03c4a322fb4d8f5f0028b5e30e1a441f73d48e12301fd"
	I1002 12:23:41.732184 2626896 cri.go:89] found id: "f1d47bfdbf0492d44243fbb956f011a3a3cb89629cb38698a0517972d7b2bcf9"
	I1002 12:23:41.732190 2626896 cri.go:89] found id: ""
	I1002 12:23:41.732197 2626896 logs.go:284] 2 containers: [e566561f7200b197c4f03c4a322fb4d8f5f0028b5e30e1a441f73d48e12301fd f1d47bfdbf0492d44243fbb956f011a3a3cb89629cb38698a0517972d7b2bcf9]
	I1002 12:23:41.732254 2626896 ssh_runner.go:195] Run: which crictl
	I1002 12:23:41.738052 2626896 ssh_runner.go:195] Run: which crictl
	I1002 12:23:41.743452 2626896 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 12:23:41.743599 2626896 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 12:23:41.809818 2626896 cri.go:89] found id: ""
	I1002 12:23:41.809845 2626896 logs.go:284] 0 containers: []
	W1002 12:23:41.809854 2626896 logs.go:286] No container was found matching "kindnet"
	I1002 12:23:41.809862 2626896 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1002 12:23:41.809924 2626896 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 12:23:41.876577 2626896 cri.go:89] found id: ""
	I1002 12:23:41.876603 2626896 logs.go:284] 0 containers: []
	W1002 12:23:41.876612 2626896 logs.go:286] No container was found matching "storage-provisioner"
	I1002 12:23:41.876633 2626896 logs.go:123] Gathering logs for kube-apiserver [e7547ea18621a8e51949f5d23db77e7dda7b034958222dd8693dc7a53d24ac6b] ...
	I1002 12:23:41.876653 2626896 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7547ea18621a8e51949f5d23db77e7dda7b034958222dd8693dc7a53d24ac6b"
	I1002 12:23:41.933469 2626896 logs.go:123] Gathering logs for kube-scheduler [190a214fbd04dbd14bb0bf18b3701f5316e2c1e886ecc040fda2694810fc1575] ...
	I1002 12:23:41.933497 2626896 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 190a214fbd04dbd14bb0bf18b3701f5316e2c1e886ecc040fda2694810fc1575"
	I1002 12:23:42.041273 2626896 logs.go:123] Gathering logs for kube-controller-manager [e566561f7200b197c4f03c4a322fb4d8f5f0028b5e30e1a441f73d48e12301fd] ...
	I1002 12:23:42.041308 2626896 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e566561f7200b197c4f03c4a322fb4d8f5f0028b5e30e1a441f73d48e12301fd"
	I1002 12:23:42.093521 2626896 logs.go:123] Gathering logs for CRI-O ...
	I1002 12:23:42.093554 2626896 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 12:23:42.147751 2626896 logs.go:123] Gathering logs for kubelet ...
	I1002 12:23:42.147793 2626896 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 12:23:42.278113 2626896 logs.go:123] Gathering logs for dmesg ...
	I1002 12:23:42.278153 2626896 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 12:23:42.302669 2626896 logs.go:123] Gathering logs for describe nodes ...
	I1002 12:23:42.302705 2626896 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 12:23:42.383149 2626896 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 12:23:42.383173 2626896 logs.go:123] Gathering logs for kube-controller-manager [f1d47bfdbf0492d44243fbb956f011a3a3cb89629cb38698a0517972d7b2bcf9] ...
	I1002 12:23:42.383187 2626896 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f1d47bfdbf0492d44243fbb956f011a3a3cb89629cb38698a0517972d7b2bcf9"
	I1002 12:23:42.429161 2626896 logs.go:123] Gathering logs for container status ...
	I1002 12:23:42.429189 2626896 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 12:23:44.978059 2626896 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1002 12:23:44.978521 2626896 api_server.go:269] stopped: https://192.168.67.2:8443/healthz: Get "https://192.168.67.2:8443/healthz": dial tcp 192.168.67.2:8443: connect: connection refused
	I1002 12:23:44.978565 2626896 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 12:23:44.978626 2626896 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 12:23:45.057772 2626896 cri.go:89] found id: "e7547ea18621a8e51949f5d23db77e7dda7b034958222dd8693dc7a53d24ac6b"
	I1002 12:23:45.057801 2626896 cri.go:89] found id: ""
	I1002 12:23:45.057810 2626896 logs.go:284] 1 containers: [e7547ea18621a8e51949f5d23db77e7dda7b034958222dd8693dc7a53d24ac6b]
	I1002 12:23:45.057923 2626896 ssh_runner.go:195] Run: which crictl
	I1002 12:23:45.079326 2626896 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 12:23:45.079408 2626896 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 12:23:45.142542 2626896 cri.go:89] found id: ""
	I1002 12:23:45.142573 2626896 logs.go:284] 0 containers: []
	W1002 12:23:45.142583 2626896 logs.go:286] No container was found matching "etcd"
	I1002 12:23:45.142590 2626896 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 12:23:45.142659 2626896 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 12:23:45.204129 2626896 cri.go:89] found id: ""
	I1002 12:23:45.204156 2626896 logs.go:284] 0 containers: []
	W1002 12:23:45.204166 2626896 logs.go:286] No container was found matching "coredns"
	I1002 12:23:45.204173 2626896 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 12:23:45.204244 2626896 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 12:23:45.270931 2626896 cri.go:89] found id: "190a214fbd04dbd14bb0bf18b3701f5316e2c1e886ecc040fda2694810fc1575"
	I1002 12:23:45.271006 2626896 cri.go:89] found id: ""
	I1002 12:23:45.271030 2626896 logs.go:284] 1 containers: [190a214fbd04dbd14bb0bf18b3701f5316e2c1e886ecc040fda2694810fc1575]
	I1002 12:23:45.271132 2626896 ssh_runner.go:195] Run: which crictl
	I1002 12:23:45.277074 2626896 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 12:23:45.277170 2626896 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 12:23:45.332433 2626896 cri.go:89] found id: ""
	I1002 12:23:45.332469 2626896 logs.go:284] 0 containers: []
	W1002 12:23:45.332479 2626896 logs.go:286] No container was found matching "kube-proxy"
	I1002 12:23:45.332487 2626896 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 12:23:45.332593 2626896 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 12:23:45.394838 2626896 cri.go:89] found id: "e566561f7200b197c4f03c4a322fb4d8f5f0028b5e30e1a441f73d48e12301fd"
	I1002 12:23:45.394915 2626896 cri.go:89] found id: "f1d47bfdbf0492d44243fbb956f011a3a3cb89629cb38698a0517972d7b2bcf9"
	I1002 12:23:45.394930 2626896 cri.go:89] found id: ""
	I1002 12:23:45.394939 2626896 logs.go:284] 2 containers: [e566561f7200b197c4f03c4a322fb4d8f5f0028b5e30e1a441f73d48e12301fd f1d47bfdbf0492d44243fbb956f011a3a3cb89629cb38698a0517972d7b2bcf9]
	I1002 12:23:45.395008 2626896 ssh_runner.go:195] Run: which crictl
	I1002 12:23:45.400716 2626896 ssh_runner.go:195] Run: which crictl
	I1002 12:23:45.405797 2626896 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 12:23:45.405896 2626896 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 12:23:45.454446 2626896 cri.go:89] found id: ""
	I1002 12:23:45.454519 2626896 logs.go:284] 0 containers: []
	W1002 12:23:45.454541 2626896 logs.go:286] No container was found matching "kindnet"
	I1002 12:23:45.454565 2626896 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1002 12:23:45.454656 2626896 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 12:23:45.503298 2626896 cri.go:89] found id: ""
	I1002 12:23:45.503374 2626896 logs.go:284] 0 containers: []
	W1002 12:23:45.503397 2626896 logs.go:286] No container was found matching "storage-provisioner"
	I1002 12:23:45.503418 2626896 logs.go:123] Gathering logs for kubelet ...
	I1002 12:23:45.503443 2626896 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 12:23:45.624512 2626896 logs.go:123] Gathering logs for describe nodes ...
	I1002 12:23:45.624555 2626896 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 12:23:45.709482 2626896 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 12:23:45.709500 2626896 logs.go:123] Gathering logs for kube-apiserver [e7547ea18621a8e51949f5d23db77e7dda7b034958222dd8693dc7a53d24ac6b] ...
	I1002 12:23:45.709512 2626896 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7547ea18621a8e51949f5d23db77e7dda7b034958222dd8693dc7a53d24ac6b"
	I1002 12:23:45.784702 2626896 logs.go:123] Gathering logs for CRI-O ...
	I1002 12:23:45.784729 2626896 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 12:23:45.850345 2626896 logs.go:123] Gathering logs for dmesg ...
	I1002 12:23:45.850387 2626896 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 12:23:45.877536 2626896 logs.go:123] Gathering logs for kube-scheduler [190a214fbd04dbd14bb0bf18b3701f5316e2c1e886ecc040fda2694810fc1575] ...
	I1002 12:23:45.877572 2626896 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 190a214fbd04dbd14bb0bf18b3701f5316e2c1e886ecc040fda2694810fc1575"
	I1002 12:23:45.979763 2626896 logs.go:123] Gathering logs for kube-controller-manager [e566561f7200b197c4f03c4a322fb4d8f5f0028b5e30e1a441f73d48e12301fd] ...
	I1002 12:23:45.979801 2626896 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e566561f7200b197c4f03c4a322fb4d8f5f0028b5e30e1a441f73d48e12301fd"
	I1002 12:23:46.027308 2626896 logs.go:123] Gathering logs for kube-controller-manager [f1d47bfdbf0492d44243fbb956f011a3a3cb89629cb38698a0517972d7b2bcf9] ...
	I1002 12:23:46.027340 2626896 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f1d47bfdbf0492d44243fbb956f011a3a3cb89629cb38698a0517972d7b2bcf9"
	I1002 12:23:46.075989 2626896 logs.go:123] Gathering logs for container status ...
	I1002 12:23:46.076016 2626896 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 12:23:50.774018 2639977 ssh_runner.go:235] Completed: sudo systemctl restart crio: (8.997837274s)
	I1002 12:23:50.774046 2639977 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 12:23:50.774100 2639977 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 12:23:50.778940 2639977 start.go:537] Will wait 60s for crictl version
	I1002 12:23:50.779006 2639977 ssh_runner.go:195] Run: which crictl
	I1002 12:23:50.783491 2639977 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1002 12:23:50.826301 2639977 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1002 12:23:50.826409 2639977 ssh_runner.go:195] Run: crio --version
	I1002 12:23:50.875711 2639977 ssh_runner.go:195] Run: crio --version
	I1002 12:23:50.926254 2639977 out.go:177] * Preparing Kubernetes v1.28.2 on CRI-O 1.24.6 ...
	I1002 12:23:50.928370 2639977 cli_runner.go:164] Run: docker network inspect pause-668509 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 12:23:50.946230 2639977 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1002 12:23:50.951586 2639977 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1002 12:23:50.951653 2639977 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 12:23:51.010184 2639977 crio.go:496] all images are preloaded for cri-o runtime.
	I1002 12:23:51.010211 2639977 crio.go:415] Images already preloaded, skipping extraction
	I1002 12:23:51.010286 2639977 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 12:23:51.057820 2639977 crio.go:496] all images are preloaded for cri-o runtime.
	I1002 12:23:51.057844 2639977 cache_images.go:84] Images are preloaded, skipping loading
	I1002 12:23:51.057930 2639977 ssh_runner.go:195] Run: crio config
	I1002 12:23:51.134323 2639977 cni.go:84] Creating CNI manager for ""
	I1002 12:23:51.134354 2639977 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 12:23:51.134375 2639977 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1002 12:23:51.134396 2639977 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.28.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-668509 NodeName:pause-668509 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 12:23:51.134556 2639977 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-668509"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 12:23:51.134647 2639977 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=pause-668509 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.2 ClusterName:pause-668509 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1002 12:23:51.134723 2639977 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.2
	I1002 12:23:51.147206 2639977 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 12:23:51.147303 2639977 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 12:23:51.159147 2639977 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (422 bytes)
	I1002 12:23:51.182957 2639977 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 12:23:51.205939 2639977 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2093 bytes)
	I1002 12:23:51.228986 2639977 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1002 12:23:51.234096 2639977 certs.go:56] Setting up /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/pause-668509 for IP: 192.168.85.2
	I1002 12:23:51.234145 2639977 certs.go:190] acquiring lock for shared ca certs: {Name:mk2e28f0a4c3849593f708b97426b4e4332dc9e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 12:23:51.234300 2639977 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17340-2494243/.minikube/ca.key
	I1002 12:23:51.234363 2639977 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17340-2494243/.minikube/proxy-client-ca.key
	I1002 12:23:51.234455 2639977 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/pause-668509/client.key
	I1002 12:23:51.234521 2639977 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/pause-668509/apiserver.key.43b9df8c
	I1002 12:23:51.234574 2639977 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/pause-668509/proxy-client.key
	I1002 12:23:51.234697 2639977 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-2494243/.minikube/certs/home/jenkins/minikube-integration/17340-2494243/.minikube/certs/2499598.pem (1338 bytes)
	W1002 12:23:51.234734 2639977 certs.go:433] ignoring /home/jenkins/minikube-integration/17340-2494243/.minikube/certs/home/jenkins/minikube-integration/17340-2494243/.minikube/certs/2499598_empty.pem, impossibly tiny 0 bytes
	I1002 12:23:51.234746 2639977 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-2494243/.minikube/certs/home/jenkins/minikube-integration/17340-2494243/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 12:23:51.234782 2639977 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-2494243/.minikube/certs/home/jenkins/minikube-integration/17340-2494243/.minikube/certs/ca.pem (1082 bytes)
	I1002 12:23:51.234814 2639977 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-2494243/.minikube/certs/home/jenkins/minikube-integration/17340-2494243/.minikube/certs/cert.pem (1123 bytes)
	I1002 12:23:51.234845 2639977 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-2494243/.minikube/certs/home/jenkins/minikube-integration/17340-2494243/.minikube/certs/key.pem (1675 bytes)
	I1002 12:23:51.234897 2639977 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-2494243/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17340-2494243/.minikube/files/etc/ssl/certs/24995982.pem (1708 bytes)
	I1002 12:23:51.235621 2639977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/pause-668509/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1002 12:23:51.266409 2639977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/pause-668509/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 12:23:51.296669 2639977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/pause-668509/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 12:23:51.326293 2639977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/pause-668509/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1002 12:23:51.356979 2639977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2494243/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 12:23:51.387304 2639977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2494243/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1002 12:23:51.417020 2639977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2494243/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 12:23:51.447152 2639977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2494243/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1002 12:23:51.476983 2639977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2494243/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 12:23:51.506612 2639977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2494243/.minikube/certs/2499598.pem --> /usr/share/ca-certificates/2499598.pem (1338 bytes)
	I1002 12:23:51.537204 2639977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2494243/.minikube/files/etc/ssl/certs/24995982.pem --> /usr/share/ca-certificates/24995982.pem (1708 bytes)
	I1002 12:23:51.568165 2639977 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 12:23:51.591366 2639977 ssh_runner.go:195] Run: openssl version
	I1002 12:23:51.599159 2639977 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 12:23:51.611989 2639977 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 12:23:51.617200 2639977 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct  2 11:39 /usr/share/ca-certificates/minikubeCA.pem
	I1002 12:23:51.617304 2639977 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 12:23:51.627433 2639977 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 12:23:51.639179 2639977 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2499598.pem && ln -fs /usr/share/ca-certificates/2499598.pem /etc/ssl/certs/2499598.pem"
	I1002 12:23:51.651858 2639977 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2499598.pem
	I1002 12:23:51.657111 2639977 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct  2 11:46 /usr/share/ca-certificates/2499598.pem
	I1002 12:23:51.657181 2639977 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2499598.pem
	I1002 12:23:51.666751 2639977 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2499598.pem /etc/ssl/certs/51391683.0"
	I1002 12:23:51.678931 2639977 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/24995982.pem && ln -fs /usr/share/ca-certificates/24995982.pem /etc/ssl/certs/24995982.pem"
	I1002 12:23:51.691698 2639977 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/24995982.pem
	I1002 12:23:51.697169 2639977 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct  2 11:46 /usr/share/ca-certificates/24995982.pem
	I1002 12:23:51.697261 2639977 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/24995982.pem
	I1002 12:23:51.706360 2639977 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/24995982.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 12:23:51.718258 2639977 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1002 12:23:51.723423 2639977 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 12:23:51.732718 2639977 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 12:23:51.741811 2639977 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 12:23:51.750912 2639977 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 12:23:51.759989 2639977 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 12:23:51.769196 2639977 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
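The six `openssl x509 -checkend 86400` runs above are minikube verifying that each control-plane certificate will still be valid at least 24 hours (86400 seconds) from now; an exit status of 0 means the cert passes. The same check can be reproduced against any certificate. A minimal sketch using a throwaway self-signed cert (the `/tmp` paths are illustrative, not minikube's files):

```shell
# Generate a throwaway cert valid for 2 days (illustrative only, not a
# minikube file), then run the same expiry check seen in the log.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo" \
  -keyout /tmp/demo.key -out /tmp/demo.crt -days 2 2>/dev/null

# -checkend N exits 0 if the cert is still valid N seconds from now.
if openssl x509 -noout -in /tmp/demo.crt -checkend 86400 >/dev/null; then
  echo "valid for at least 24h"
else
  echo "expires within 24h"
fi
```

Since the throwaway cert is valid for 2 days, the check succeeds; against a cert expiring within the window, `-checkend` returns non-zero and minikube would regenerate it.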
	I1002 12:23:51.778365 2639977 kubeadm.go:404] StartCluster: {Name:pause-668509 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:pause-668509 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 12:23:51.778490 2639977 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 12:23:51.778563 2639977 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 12:23:51.822968 2639977 cri.go:89] found id: "1d65fa43be6526a0e8de55d55c972bf54b762e630bff8bedaf4344333e76d262"
	I1002 12:23:51.822990 2639977 cri.go:89] found id: "765f6029e04503733090301109f6ac9d0680f171a384f96f69f1869087493160"
	I1002 12:23:51.822997 2639977 cri.go:89] found id: "91e1d9f2ab2dac9b1e8357f75a947028e40967ef9a5c6bb6e3ad20e171893d28"
	I1002 12:23:51.823002 2639977 cri.go:89] found id: "3f197576727f7b9ec7929a1095e70a78403ef2146c165ffdab079f0e2dede4ee"
	I1002 12:23:51.823006 2639977 cri.go:89] found id: "edc29e5627cfe3eaa7d550461b0649db0a97205623edd25c9b3483d9aa1e5d53"
	I1002 12:23:51.823010 2639977 cri.go:89] found id: "882bfdefd067301b2b80b674d4032a06b2972666a29999d24088c9dd4625335c"
	I1002 12:23:51.823015 2639977 cri.go:89] found id: "88efc396e9b05e5dc78b0839250cb73e559f9650b0c57fcb5d74e358a54fbcb8"
	I1002 12:23:51.823019 2639977 cri.go:89] found id: "014046d912dfad6da106e30926717a8ed76ad72bbdf909d598d7147d8acf8c0f"
	I1002 12:23:51.823023 2639977 cri.go:89] found id: "b33d761035748a0a3eec9230dbb5e3e8620b6b53104f8fbacf34b58e861dbd33"
	I1002 12:23:51.823031 2639977 cri.go:89] found id: "b75a9c72a0a9399ee5479a16b4e1dddef04175cba6814d862319c964ff038b22"
	I1002 12:23:51.823036 2639977 cri.go:89] found id: "d4334935979b6cd4813af58cde21e855aa6c9ce043bcfdd7d9f17311e49aad4d"
	I1002 12:23:51.823045 2639977 cri.go:89] found id: "6322ef12b5a5599464089b8ecd8b3448ccced205d43a22100aa3af0cb08d14e3"
	I1002 12:23:51.823050 2639977 cri.go:89] found id: "4c6d76d4f88efe79aacaf3c5a2fd4b03815c33f7e45703ab532ad1644e4cc1c7"
	I1002 12:23:51.823061 2639977 cri.go:89] found id: "91de1cce327dd75b6230291957ffc035b5386862fc03bd5173a07d32a578c04e"
	I1002 12:23:51.823065 2639977 cri.go:89] found id: "72f51e2ba1ec80099b42b049d710229c7f389aac19be780df56f100493a02618"
	I1002 12:23:51.823073 2639977 cri.go:89] found id: "f2a2f01e6b3c2a7757ee29334ecd929f156f633596a50d91eefb73ae8b541fae"
	I1002 12:23:51.823079 2639977 cri.go:89] found id: ""
	I1002 12:23:51.823137 2639977 ssh_runner.go:195] Run: sudo runc list -f json
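Earlier in the log, minikube hashes each CA PEM with `openssl x509 -hash` and symlinks it under `/etc/ssl/certs/<hash>.0` — the subject-hash filename OpenSSL uses to locate trusted CAs. A sketch of the same sequence against a throwaway CA cert (all `/tmp` paths are illustrative, not the cluster's):

```shell
# Create a throwaway self-signed CA cert (illustrative paths only).
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demoCA" \
  -keyout /tmp/demo-ca.key -out /tmp/demo-ca.pem -days 1 2>/dev/null

# Compute the OpenSSL subject hash, as the log does for minikubeCA.pem.
hash=$(openssl x509 -hash -noout -in /tmp/demo-ca.pem)

# Link the PEM under <hash>.0 so OpenSSL's CA lookup can find it,
# mirroring the "test -L ... || ln -fs ..." steps in the log.
ln -fs /tmp/demo-ca.pem "/tmp/${hash}.0"
ls -la "/tmp/${hash}.0"
```

This is why the log shows a `ls -la` and an `openssl x509 -hash -noout` for each PEM before the conditional symlink: the hash determines the filename, and the `test -L ||` guard makes the link idempotent across restarts.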
	
	* 
	* ==> CRI-O <==
	* Oct 02 12:23:58 pause-668509 crio[2716]: time="2023-10-02 12:23:58.022190570Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist\": CREATE"
	Oct 02 12:23:58 pause-668509 crio[2716]: time="2023-10-02 12:23:58.040174591Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 12:23:58 pause-668509 crio[2716]: time="2023-10-02 12:23:58.040219629Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 12:24:10 pause-668509 crio[2716]: time="2023-10-02 12:24:10.233387773Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.10.1" id=9e87b2d7-0f41-4c2f-9105-a3bbf29156ea name=/runtime.v1.ImageService/ImageStatus
	Oct 02 12:24:10 pause-668509 crio[2716]: time="2023-10-02 12:24:10.233615063Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108,RepoTags:[registry.k8s.io/coredns/coredns:v1.10.1],RepoDigests:[registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105 registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e],Size_:51393451,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=9e87b2d7-0f41-4c2f-9105-a3bbf29156ea name=/runtime.v1.ImageService/ImageStatus
	Oct 02 12:24:10 pause-668509 crio[2716]: time="2023-10-02 12:24:10.235124063Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.10.1" id=3083962c-9fe0-45ae-80b4-5f77d868bf65 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 12:24:10 pause-668509 crio[2716]: time="2023-10-02 12:24:10.235372072Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108,RepoTags:[registry.k8s.io/coredns/coredns:v1.10.1],RepoDigests:[registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105 registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e],Size_:51393451,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=3083962c-9fe0-45ae-80b4-5f77d868bf65 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 12:24:10 pause-668509 crio[2716]: time="2023-10-02 12:24:10.236297637Z" level=info msg="Creating container: kube-system/coredns-5dd5756b68-j7vsd/coredns" id=4cb41110-407a-4ce8-b76f-9a42eb1de389 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 12:24:10 pause-668509 crio[2716]: time="2023-10-02 12:24:10.236392251Z" level=warning msg="Allowed annotations are specified for workload []"
	Oct 02 12:24:10 pause-668509 crio[2716]: time="2023-10-02 12:24:10.251759273Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/82b6aa3179384bdb1515b36f3002e4e966dfcb843e4558a9a538fe7457059e59/merged/etc/passwd: no such file or directory"
	Oct 02 12:24:10 pause-668509 crio[2716]: time="2023-10-02 12:24:10.251959421Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/82b6aa3179384bdb1515b36f3002e4e966dfcb843e4558a9a538fe7457059e59/merged/etc/group: no such file or directory"
	Oct 02 12:24:10 pause-668509 crio[2716]: time="2023-10-02 12:24:10.323406501Z" level=info msg="Created container 7d601eab89cf9e11c8895f99a6b96a42e1df74715799bd8ff799b456abd3bc81: kube-system/coredns-5dd5756b68-j7vsd/coredns" id=4cb41110-407a-4ce8-b76f-9a42eb1de389 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 12:24:10 pause-668509 crio[2716]: time="2023-10-02 12:24:10.323987269Z" level=info msg="Starting container: 7d601eab89cf9e11c8895f99a6b96a42e1df74715799bd8ff799b456abd3bc81" id=6db68f8a-c055-4b44-a82d-3625c78e246c name=/runtime.v1.RuntimeService/StartContainer
	Oct 02 12:24:10 pause-668509 crio[2716]: time="2023-10-02 12:24:10.336691763Z" level=info msg="Started container" PID=3512 containerID=7d601eab89cf9e11c8895f99a6b96a42e1df74715799bd8ff799b456abd3bc81 description=kube-system/coredns-5dd5756b68-j7vsd/coredns id=6db68f8a-c055-4b44-a82d-3625c78e246c name=/runtime.v1.RuntimeService/StartContainer sandboxID=e62eee13af75524043a806b0cbc26ad18f94d670442009152e75cbfea4fb5d22
	Oct 02 12:24:12 pause-668509 crio[2716]: time="2023-10-02 12:24:12.232639543Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.10.1" id=4edf79cc-1cef-4245-9a82-aec5ade137f3 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 12:24:12 pause-668509 crio[2716]: time="2023-10-02 12:24:12.232868893Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108,RepoTags:[registry.k8s.io/coredns/coredns:v1.10.1],RepoDigests:[registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105 registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e],Size_:51393451,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=4edf79cc-1cef-4245-9a82-aec5ade137f3 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 12:24:12 pause-668509 crio[2716]: time="2023-10-02 12:24:12.234119563Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.10.1" id=dbb19294-5082-488d-aa77-e0ed6b4b8545 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 12:24:12 pause-668509 crio[2716]: time="2023-10-02 12:24:12.234405840Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108,RepoTags:[registry.k8s.io/coredns/coredns:v1.10.1],RepoDigests:[registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105 registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e],Size_:51393451,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=dbb19294-5082-488d-aa77-e0ed6b4b8545 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 12:24:12 pause-668509 crio[2716]: time="2023-10-02 12:24:12.236424298Z" level=info msg="Creating container: kube-system/coredns-5dd5756b68-zkmnf/coredns" id=c425fd8a-2874-4a6a-a990-233664096fa6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 12:24:12 pause-668509 crio[2716]: time="2023-10-02 12:24:12.236556286Z" level=warning msg="Allowed annotations are specified for workload []"
	Oct 02 12:24:12 pause-668509 crio[2716]: time="2023-10-02 12:24:12.251694663Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/ee1b3e489842220926cad6721811e824019f5d765daa988c52e8232cfb1760f6/merged/etc/passwd: no such file or directory"
	Oct 02 12:24:12 pause-668509 crio[2716]: time="2023-10-02 12:24:12.251750614Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/ee1b3e489842220926cad6721811e824019f5d765daa988c52e8232cfb1760f6/merged/etc/group: no such file or directory"
	Oct 02 12:24:12 pause-668509 crio[2716]: time="2023-10-02 12:24:12.331699512Z" level=info msg="Created container 01cd06b39750919f7307ad071b36e012031b8b05c0c3f70b4b7ebd081a6f964c: kube-system/coredns-5dd5756b68-zkmnf/coredns" id=c425fd8a-2874-4a6a-a990-233664096fa6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 12:24:12 pause-668509 crio[2716]: time="2023-10-02 12:24:12.334249154Z" level=info msg="Starting container: 01cd06b39750919f7307ad071b36e012031b8b05c0c3f70b4b7ebd081a6f964c" id=efde65b9-3988-4b91-84be-3d50ce664d09 name=/runtime.v1.RuntimeService/StartContainer
	Oct 02 12:24:12 pause-668509 crio[2716]: time="2023-10-02 12:24:12.349451073Z" level=info msg="Started container" PID=3562 containerID=01cd06b39750919f7307ad071b36e012031b8b05c0c3f70b4b7ebd081a6f964c description=kube-system/coredns-5dd5756b68-zkmnf/coredns id=efde65b9-3988-4b91-84be-3d50ce664d09 name=/runtime.v1.RuntimeService/StartContainer sandboxID=924858efb0edf7686af6798f9dacdda3bb9d6787731c7f30b9dcb0703999528b
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	01cd06b397509       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108   4 seconds ago       Running             coredns                   2                   924858efb0edf       coredns-5dd5756b68-zkmnf
	7d601eab89cf9       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108   6 seconds ago       Running             coredns                   2                   e62eee13af755       coredns-5dd5756b68-j7vsd
	1d10a59354950       04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26   23 seconds ago      Running             kindnet-cni               2                   f2c1c83b5c15f       kindnet-pkx85
	c8af7d0a9b1ef       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace   23 seconds ago      Running             etcd                      2                   40fbe4cc2226b       etcd-pause-668509
	7a46fbd014b6e       7da62c127fc0f2c3473babe4dd0fe1da874278c4e524a490b1781e3e0e6dddfa   23 seconds ago      Running             kube-proxy                2                   4d282c0c07d58       kube-proxy-54fsr
	0f1e64fd43e32       30bb499447fe1bc655853e2e8ac386cdeb28c80263536259cb54f290f9a58d6c   23 seconds ago      Running             kube-apiserver            2                   71f4895f9a35c       kube-apiserver-pause-668509
	36c2b8703fe1a       64fc40cee3716a4596d219b360ce536adb7998eaeae3f5dbb774d6503f5039d7   23 seconds ago      Running             kube-scheduler            2                   d5534b361a530       kube-scheduler-pause-668509
	1cc27dbdfb293       89d57b83c17862d0ca2dd214e9e5ad425f8d67ecba32d10b846f8d22d3b5597c   23 seconds ago      Running             kube-controller-manager   2                   5fc9ffdbb568d       kube-controller-manager-pause-668509
	1d65fa43be652       30bb499447fe1bc655853e2e8ac386cdeb28c80263536259cb54f290f9a58d6c   36 seconds ago      Exited              kube-apiserver            1                   71f4895f9a35c       kube-apiserver-pause-668509
	765f6029e0450       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108   36 seconds ago      Exited              coredns                   1                   e62eee13af755       coredns-5dd5756b68-j7vsd
	91e1d9f2ab2da       89d57b83c17862d0ca2dd214e9e5ad425f8d67ecba32d10b846f8d22d3b5597c   36 seconds ago      Exited              kube-controller-manager   1                   5fc9ffdbb568d       kube-controller-manager-pause-668509
	3f197576727f7       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace   36 seconds ago      Exited              etcd                      1                   40fbe4cc2226b       etcd-pause-668509
	edc29e5627cfe       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108   36 seconds ago      Exited              coredns                   1                   924858efb0edf       coredns-5dd5756b68-zkmnf
	882bfdefd0673       04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26   36 seconds ago      Exited              kindnet-cni               1                   f2c1c83b5c15f       kindnet-pkx85
	88efc396e9b05       64fc40cee3716a4596d219b360ce536adb7998eaeae3f5dbb774d6503f5039d7   36 seconds ago      Exited              kube-scheduler            1                   d5534b361a530       kube-scheduler-pause-668509
	014046d912dfa       7da62c127fc0f2c3473babe4dd0fe1da874278c4e524a490b1781e3e0e6dddfa   36 seconds ago      Exited              kube-proxy                1                   4d282c0c07d58       kube-proxy-54fsr
	
	* 
	* ==> coredns [01cd06b39750919f7307ad071b36e012031b8b05c0c3f70b4b7ebd081a6f964c] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:54643 - 56215 "HINFO IN 6260093662389047834.4517571499266444921. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.022751645s
	
	* 
	* ==> coredns [765f6029e04503733090301109f6ac9d0680f171a384f96f69f1869087493160] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:44717 - 1084 "HINFO IN 289934871214255645.2867881275319821904. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.014817508s
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
	
	* 
	* ==> coredns [7d601eab89cf9e11c8895f99a6b96a42e1df74715799bd8ff799b456abd3bc81] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:46305 - 59821 "HINFO IN 3123776545537870573.2831650581655121560. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.040399804s
	
	* 
	* ==> coredns [edc29e5627cfe3eaa7d550461b0649db0a97205623edd25c9b3483d9aa1e5d53] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:52016 - 42404 "HINFO IN 4303122002242223.5155889025548070122. udp 54 false 512" NXDOMAIN qr,rd,ra 54 0.042840992s
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
	
	* 
	* ==> describe nodes <==
	* Name:               pause-668509
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-668509
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=45957ed538272972541ab48cdf2c4b323d7f5c18
	                    minikube.k8s.io/name=pause-668509
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_02T12_22_45_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 02 Oct 2023 12:22:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-668509
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 02 Oct 2023 12:24:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 02 Oct 2023 12:23:28 +0000   Mon, 02 Oct 2023 12:22:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 02 Oct 2023 12:23:28 +0000   Mon, 02 Oct 2023 12:22:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 02 Oct 2023 12:23:28 +0000   Mon, 02 Oct 2023 12:22:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 02 Oct 2023 12:23:28 +0000   Mon, 02 Oct 2023 12:23:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-668509
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	System Info:
	  Machine ID:                 8cb933677f0e435dafb0b2eec666892b
	  System UUID:                5130b464-2e2d-4979-8db2-1a2be70a00b6
	  Boot ID:                    67922263-14c1-496d-a009-5b9469adca8d
	  Kernel Version:             5.15.0-1045-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.2
	  Kube-Proxy Version:         v1.28.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-j7vsd                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     80s
	  kube-system                 coredns-5dd5756b68-zkmnf                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     80s
	  kube-system                 etcd-pause-668509                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         92s
	  kube-system                 kindnet-pkx85                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      80s
	  kube-system                 kube-apiserver-pause-668509             250m (12%)    0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 kube-controller-manager-pause-668509    200m (10%)    0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 kube-proxy-54fsr                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         80s
	  kube-system                 kube-scheduler-pause-668509             100m (5%)     0 (0%)      0 (0%)           0 (0%)         92s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (3%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 78s                  kube-proxy       
	  Normal   Starting                 17s                  kube-proxy       
	  Normal   NodeHasNoDiskPressure    101s (x8 over 101s)  kubelet          Node pause-668509 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     101s (x8 over 101s)  kubelet          Node pause-668509 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  101s (x8 over 101s)  kubelet          Node pause-668509 status is now: NodeHasSufficientMemory
	  Normal   Starting                 92s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  92s                  kubelet          Node pause-668509 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    92s                  kubelet          Node pause-668509 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     92s                  kubelet          Node pause-668509 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           81s                  node-controller  Node pause-668509 event: Registered Node pause-668509 in Controller
	  Normal   NodeReady                48s                  kubelet          Node pause-668509 status is now: NodeReady
	  Warning  ContainerGCFailed        32s                  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           6s                   node-controller  Node pause-668509 event: Registered Node pause-668509 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.001115] FS-Cache: O-key=[8] 'b3495c0100000000'
	[  +0.000697] FS-Cache: N-cookie c=000000c0 [p=000000b7 fl=2 nc=0 na=1]
	[  +0.000936] FS-Cache: N-cookie d=00000000b3d4d329{9p.inode} n=00000000c4ccfeb8
	[  +0.001126] FS-Cache: N-key=[8] 'b3495c0100000000'
	[  +0.003366] FS-Cache: Duplicate cookie detected
	[  +0.000731] FS-Cache: O-cookie c=000000b9 [p=000000b7 fl=226 nc=0 na=1]
	[  +0.001023] FS-Cache: O-cookie d=00000000b3d4d329{9p.inode} n=00000000389fe983
	[  +0.001046] FS-Cache: O-key=[8] 'b3495c0100000000'
	[  +0.000724] FS-Cache: N-cookie c=000000c1 [p=000000b7 fl=2 nc=0 na=1]
	[  +0.001003] FS-Cache: N-cookie d=00000000b3d4d329{9p.inode} n=000000006a94d035
	[  +0.001082] FS-Cache: N-key=[8] 'b3495c0100000000'
	[  +2.104034] FS-Cache: Duplicate cookie detected
	[  +0.000750] FS-Cache: O-cookie c=000000b8 [p=000000b7 fl=226 nc=0 na=1]
	[  +0.000953] FS-Cache: O-cookie d=00000000b3d4d329{9p.inode} n=000000000fb754f7
	[  +0.001029] FS-Cache: O-key=[8] 'b2495c0100000000'
	[  +0.000775] FS-Cache: N-cookie c=000000c3 [p=000000b7 fl=2 nc=0 na=1]
	[  +0.000924] FS-Cache: N-cookie d=00000000b3d4d329{9p.inode} n=00000000c4ccfeb8
	[  +0.001095] FS-Cache: N-key=[8] 'b2495c0100000000'
	[  +0.359690] FS-Cache: Duplicate cookie detected
	[  +0.000696] FS-Cache: O-cookie c=000000bd [p=000000b7 fl=226 nc=0 na=1]
	[  +0.000971] FS-Cache: O-cookie d=00000000b3d4d329{9p.inode} n=00000000bde6dc72
	[  +0.001094] FS-Cache: O-key=[8] 'b8495c0100000000'
	[  +0.000774] FS-Cache: N-cookie c=000000c4 [p=000000b7 fl=2 nc=0 na=1]
	[  +0.000930] FS-Cache: N-cookie d=00000000b3d4d329{9p.inode} n=00000000e33aca47
	[  +0.001035] FS-Cache: N-key=[8] 'b8495c0100000000'
	
	* 
	* ==> etcd [3f197576727f7b9ec7929a1095e70a78403ef2146c165ffdab079f0e2dede4ee] <==
	* {"level":"info","ts":"2023-10-02T12:23:40.292126Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"48.503667ms"}
	{"level":"info","ts":"2023-10-02T12:23:40.382142Z","caller":"etcdserver/server.go:530","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2023-10-02T12:23:40.449783Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","commit-index":457}
	{"level":"info","ts":"2023-10-02T12:23:40.449936Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed switched to configuration voters=()"}
	{"level":"info","ts":"2023-10-02T12:23:40.449967Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became follower at term 2"}
	{"level":"info","ts":"2023-10-02T12:23:40.449978Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 9f0758e1c58a86ed [peers: [], term: 2, commit: 457, applied: 0, lastindex: 457, lastterm: 2]"}
	{"level":"warn","ts":"2023-10-02T12:23:40.45286Z","caller":"auth/store.go:1238","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2023-10-02T12:23:40.478367Z","caller":"mvcc/kvstore.go:393","msg":"kvstore restored","current-rev":438}
	{"level":"info","ts":"2023-10-02T12:23:40.506244Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2023-10-02T12:23:40.511095Z","caller":"etcdserver/corrupt.go:95","msg":"starting initial corruption check","local-member-id":"9f0758e1c58a86ed","timeout":"7s"}
	{"level":"info","ts":"2023-10-02T12:23:40.512195Z","caller":"etcdserver/corrupt.go:165","msg":"initial corruption checking passed; no corruption","local-member-id":"9f0758e1c58a86ed"}
	{"level":"info","ts":"2023-10-02T12:23:40.512242Z","caller":"etcdserver/server.go:854","msg":"starting etcd server","local-member-id":"9f0758e1c58a86ed","local-server-version":"3.5.9","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2023-10-02T12:23:40.516382Z","caller":"etcdserver/server.go:754","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2023-10-02T12:23:40.517029Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-10-02T12:23:40.517076Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-10-02T12:23:40.517085Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-10-02T12:23:40.52575Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2023-10-02T12:23:40.525879Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2023-10-02T12:23:40.526025Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-02T12:23:40.526063Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-02T12:23:40.538138Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-10-02T12:23:40.538332Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-10-02T12:23:40.53836Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-10-02T12:23:40.538418Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2023-10-02T12:23:40.538431Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	
	* 
	* ==> etcd [c8af7d0a9b1efa58b8b7f99c10a1984c1ce9ecd2bd9d32fd9e835104c3c6cfdb] <==
	* {"level":"info","ts":"2023-10-02T12:23:53.305062Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-10-02T12:23:53.305072Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-10-02T12:23:53.305716Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2023-10-02T12:23:53.353867Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2023-10-02T12:23:53.354484Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-02T12:23:53.354532Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-02T12:23:53.376556Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-10-02T12:23:53.376753Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-10-02T12:23:53.376775Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-10-02T12:23:53.376824Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2023-10-02T12:23:53.376831Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2023-10-02T12:23:54.376585Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2023-10-02T12:23:54.376716Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2023-10-02T12:23:54.376769Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2023-10-02T12:23:54.37681Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2023-10-02T12:23:54.376845Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2023-10-02T12:23:54.376884Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2023-10-02T12:23:54.376916Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2023-10-02T12:23:54.384822Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-02T12:23:54.385883Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-10-02T12:23:54.386352Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-02T12:23:54.387258Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2023-10-02T12:23:54.384784Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:pause-668509 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2023-10-02T12:23:54.392556Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-10-02T12:23:54.392587Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	* 
	* ==> kernel <==
	*  12:24:16 up 20:06,  0 users,  load average: 2.55, 3.05, 2.60
	Linux pause-668509 5.15.0-1045-aws #50~20.04.1-Ubuntu SMP Wed Sep 6 17:32:55 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [1d10a59354950d12d06709e6c2d72fb0b67b72881737b058aabf63b2352188bf] <==
	* I1002 12:23:53.251630       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I1002 12:23:53.282868       1 main.go:107] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1002 12:23:53.283118       1 main.go:116] setting mtu 1500 for CNI 
	I1002 12:23:53.283163       1 main.go:146] kindnetd IP family: "ipv4"
	I1002 12:23:53.283210       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I1002 12:23:53.590199       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	I1002 12:23:53.590551       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	I1002 12:23:57.978376       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I1002 12:23:57.978483       1 main.go:227] handling current node
	I1002 12:24:08.001168       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I1002 12:24:08.001206       1 main.go:227] handling current node
	
	* 
	* ==> kindnet [882bfdefd067301b2b80b674d4032a06b2972666a29999d24088c9dd4625335c] <==
	* I1002 12:23:40.016392       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I1002 12:23:40.016980       1 main.go:107] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1002 12:23:40.018692       1 main.go:116] setting mtu 1500 for CNI 
	I1002 12:23:40.018733       1 main.go:146] kindnetd IP family: "ipv4"
	I1002 12:23:40.018936       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I1002 12:23:40.392852       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	I1002 12:23:40.393280       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	
	* 
	* ==> kube-apiserver [0f1e64fd43e3240c0c6aed3968cfbd67941c6eef246a45887ce5d1e22428ca39] <==
	* I1002 12:23:57.574440       1 naming_controller.go:291] Starting NamingConditionController
	I1002 12:23:57.574490       1 establishing_controller.go:76] Starting EstablishingController
	I1002 12:23:57.574535       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I1002 12:23:57.574579       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I1002 12:23:57.574625       1 crd_finalizer.go:266] Starting CRDFinalizer
	I1002 12:23:57.574680       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I1002 12:23:57.574712       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I1002 12:23:57.904536       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1002 12:23:57.905010       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1002 12:23:57.936617       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1002 12:23:57.936714       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1002 12:23:57.944665       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1002 12:23:57.946881       1 shared_informer.go:318] Caches are synced for configmaps
	I1002 12:23:57.967013       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1002 12:23:57.973703       1 aggregator.go:166] initial CRD sync complete...
	I1002 12:23:57.973792       1 autoregister_controller.go:141] Starting autoregister controller
	I1002 12:23:57.973840       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1002 12:23:57.973872       1 cache.go:39] Caches are synced for autoregister controller
	I1002 12:23:57.987071       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1002 12:23:57.987573       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	E1002 12:23:58.033646       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1002 12:23:58.558638       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1002 12:24:10.653544       1 controller.go:624] quota admission added evaluator for: endpoints
	I1002 12:24:10.676005       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1002 12:24:10.731563       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	* 
	* ==> kube-apiserver [1d65fa43be6526a0e8de55d55c972bf54b762e630bff8bedaf4344333e76d262] <==
	* I1002 12:23:40.420840       1 options.go:220] external host was not specified, using 192.168.85.2
	I1002 12:23:40.422030       1 server.go:148] Version: v1.28.2
	I1002 12:23:40.422060       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	
	* 
	* ==> kube-controller-manager [1cc27dbdfb293caf335859ce8d4c161b6ad2a7d177bdc89a644e4897f1c77279] <==
	* I1002 12:24:10.697205       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="85.252µs"
	I1002 12:24:10.698349       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I1002 12:24:10.698405       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client
	I1002 12:24:10.699527       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
	I1002 12:24:10.701211       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="81.814µs"
	I1002 12:24:10.701923       1 shared_informer.go:318] Caches are synced for TTL
	I1002 12:24:10.707426       1 shared_informer.go:318] Caches are synced for namespace
	I1002 12:24:10.713328       1 shared_informer.go:318] Caches are synced for daemon sets
	I1002 12:24:10.732179       1 shared_informer.go:318] Caches are synced for stateful set
	I1002 12:24:10.747045       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="21.788057ms"
	I1002 12:24:10.747227       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="51.733µs"
	I1002 12:24:10.766466       1 shared_informer.go:318] Caches are synced for disruption
	I1002 12:24:10.770051       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1002 12:24:10.788619       1 shared_informer.go:318] Caches are synced for resource quota
	I1002 12:24:10.805062       1 shared_informer.go:318] Caches are synced for cronjob
	I1002 12:24:10.829709       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-zkmnf"
	I1002 12:24:10.868003       1 shared_informer.go:318] Caches are synced for resource quota
	I1002 12:24:10.879932       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="110.841261ms"
	I1002 12:24:10.892881       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="12.81738ms"
	I1002 12:24:10.893133       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="82.782µs"
	I1002 12:24:11.205333       1 shared_informer.go:318] Caches are synced for garbage collector
	I1002 12:24:11.234061       1 shared_informer.go:318] Caches are synced for garbage collector
	I1002 12:24:11.234099       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1002 12:24:12.698999       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="69.703µs"
	I1002 12:24:12.711756       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="162.29µs"
	
	* 
	* ==> kube-controller-manager [91e1d9f2ab2dac9b1e8357f75a947028e40967ef9a5c6bb6e3ad20e171893d28] <==
	* 
	* 
	* ==> kube-proxy [014046d912dfad6da106e30926717a8ed76ad72bbdf909d598d7147d8acf8c0f] <==
	* I1002 12:23:40.721495       1 server_others.go:69] "Using iptables proxy"
	
	* 
	* ==> kube-proxy [7a46fbd014b6ef171ac4b887b0bca912677d657447a64ffe1f89112122df2bc6] <==
	* I1002 12:23:57.912754       1 server_others.go:69] "Using iptables proxy"
	I1002 12:23:58.506662       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1002 12:23:58.914887       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1002 12:23:58.920783       1 server_others.go:152] "Using iptables Proxier"
	I1002 12:23:58.920887       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1002 12:23:58.920918       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1002 12:23:58.921035       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1002 12:23:58.921290       1 server.go:846] "Version info" version="v1.28.2"
	I1002 12:23:58.921515       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 12:23:58.922298       1 config.go:188] "Starting service config controller"
	I1002 12:23:58.923128       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1002 12:23:58.923224       1 config.go:97] "Starting endpoint slice config controller"
	I1002 12:23:58.923350       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1002 12:23:58.923893       1 config.go:315] "Starting node config controller"
	I1002 12:23:58.924418       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1002 12:23:59.023841       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1002 12:23:59.023853       1 shared_informer.go:318] Caches are synced for service config
	I1002 12:23:59.025316       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [36c2b8703fe1ac719f1850010abebaf47f3b6108bb4ce8d3ea8f1a06c80a5b96] <==
	* I1002 12:23:56.517329       1 serving.go:348] Generated self-signed cert in-memory
	I1002 12:23:58.833129       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.2"
	I1002 12:23:58.833726       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 12:23:58.843808       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1002 12:23:58.844021       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I1002 12:23:58.844087       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I1002 12:23:58.844143       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1002 12:23:58.846513       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 12:23:58.846608       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1002 12:23:58.846651       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 12:23:58.846693       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1002 12:23:58.945026       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	I1002 12:23:58.947475       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1002 12:23:58.947591       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kube-scheduler [88efc396e9b05e5dc78b0839250cb73e559f9650b0c57fcb5d74e358a54fbcb8] <==
	* 
	* 
	* ==> kubelet <==
	* Oct 02 12:23:52 pause-668509 kubelet[1395]: I1002 12:23:52.610703    1395 status_manager.go:853] "Failed to get status for pod" podUID="7956f3f0-c4c6-4405-bde6-2c220f0595a7" pod="kube-system/kindnet-pkx85" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kindnet-pkx85\": dial tcp 192.168.85.2:8443: connect: connection refused"
	Oct 02 12:23:52 pause-668509 kubelet[1395]: I1002 12:23:52.610923    1395 status_manager.go:853] "Failed to get status for pod" podUID="b0ec77ab-f124-423f-a7b1-a2a48efb563b" pod="kube-system/kube-proxy-54fsr" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-proxy-54fsr\": dial tcp 192.168.85.2:8443: connect: connection refused"
	Oct 02 12:23:52 pause-668509 kubelet[1395]: I1002 12:23:52.612737    1395 status_manager.go:853] "Failed to get status for pod" podUID="7956f3f0-c4c6-4405-bde6-2c220f0595a7" pod="kube-system/kindnet-pkx85" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kindnet-pkx85\": dial tcp 192.168.85.2:8443: connect: connection refused"
	Oct 02 12:23:52 pause-668509 kubelet[1395]: I1002 12:23:52.613075    1395 status_manager.go:853] "Failed to get status for pod" podUID="b0ec77ab-f124-423f-a7b1-a2a48efb563b" pod="kube-system/kube-proxy-54fsr" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-proxy-54fsr\": dial tcp 192.168.85.2:8443: connect: connection refused"
	Oct 02 12:23:52 pause-668509 kubelet[1395]: I1002 12:23:52.615839    1395 status_manager.go:853] "Failed to get status for pod" podUID="6f2f859a-29d5-4aec-befb-42314c660c0a" pod="kube-system/coredns-5dd5756b68-j7vsd" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-j7vsd\": dial tcp 192.168.85.2:8443: connect: connection refused"
	Oct 02 12:23:52 pause-668509 kubelet[1395]: I1002 12:23:52.616194    1395 status_manager.go:853] "Failed to get status for pod" podUID="6ffeca67-4965-45fd-887d-779d8033e909" pod="kube-system/coredns-5dd5756b68-zkmnf" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-zkmnf\": dial tcp 192.168.85.2:8443: connect: connection refused"
	Oct 02 12:23:52 pause-668509 kubelet[1395]: I1002 12:23:52.616465    1395 status_manager.go:853] "Failed to get status for pod" podUID="2013f852b0714c374746c377791e3c5f" pod="kube-system/kube-apiserver-pause-668509" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-668509\": dial tcp 192.168.85.2:8443: connect: connection refused"
	Oct 02 12:23:52 pause-668509 kubelet[1395]: I1002 12:23:52.616778    1395 status_manager.go:853] "Failed to get status for pod" podUID="8c0eeb91a05b92cf83dbbd9d020af051" pod="kube-system/kube-controller-manager-pause-668509" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-668509\": dial tcp 192.168.85.2:8443: connect: connection refused"
	Oct 02 12:23:52 pause-668509 kubelet[1395]: I1002 12:23:52.617064    1395 status_manager.go:853] "Failed to get status for pod" podUID="55b3fa4b52a9f35848e6c33b248b5edd" pod="kube-system/kube-scheduler-pause-668509" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-668509\": dial tcp 192.168.85.2:8443: connect: connection refused"
	Oct 02 12:23:52 pause-668509 kubelet[1395]: I1002 12:23:52.617355    1395 status_manager.go:853] "Failed to get status for pod" podUID="a403b77af54159ac2719f26849a863d7" pod="kube-system/etcd-pause-668509" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/etcd-pause-668509\": dial tcp 192.168.85.2:8443: connect: connection refused"
	Oct 02 12:23:52 pause-668509 kubelet[1395]: I1002 12:23:52.650898    1395 scope.go:117] "RemoveContainer" containerID="72f51e2ba1ec80099b42b049d710229c7f389aac19be780df56f100493a02618"
	Oct 02 12:23:52 pause-668509 kubelet[1395]: I1002 12:23:52.814424    1395 scope.go:117] "RemoveContainer" containerID="f2a2f01e6b3c2a7757ee29334ecd929f156f633596a50d91eefb73ae8b541fae"
	Oct 02 12:23:52 pause-668509 kubelet[1395]: I1002 12:23:52.894921    1395 scope.go:117] "RemoveContainer" containerID="d4334935979b6cd4813af58cde21e855aa6c9ce043bcfdd7d9f17311e49aad4d"
	Oct 02 12:23:52 pause-668509 kubelet[1395]: I1002 12:23:52.984794    1395 scope.go:117] "RemoveContainer" containerID="6322ef12b5a5599464089b8ecd8b3448ccced205d43a22100aa3af0cb08d14e3"
	Oct 02 12:23:53 pause-668509 kubelet[1395]: I1002 12:23:53.060987    1395 scope.go:117] "RemoveContainer" containerID="b33d761035748a0a3eec9230dbb5e3e8620b6b53104f8fbacf34b58e861dbd33"
	Oct 02 12:23:53 pause-668509 kubelet[1395]: E1002 12:23:53.061859    1395 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-apiserver-pause-668509.178a49dafb3763e3", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-apiserver-pause-668509", UID:"2013f852b0714c374746c377791e3c5f", APIVersion:"v1", ResourceVersion:"", FieldPath:"spec.containers{kube-apiserver}"}, Reason:"Unhealthy", Message:"Readiness probe failed: Get \"https://192.168.85.2:8443/readyz\": dial tcp 192.168.85.2:8443: connect: connection refused", Source:v1.EventSource{Component:"kubelet", Host:"pause-668509"}, FirstTimestamp:time.Date(2023, time.October, 2, 12, 23, 34, 524822499, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 12, 23, 34, 524822499, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"pause-668509"}': 'Post "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/events": dial tcp 192.168.85.2:8443: connect: connection refused'(may retry after sleeping)
	Oct 02 12:23:53 pause-668509 kubelet[1395]: I1002 12:23:53.125916    1395 scope.go:117] "RemoveContainer" containerID="b75a9c72a0a9399ee5479a16b4e1dddef04175cba6814d862319c964ff038b22"
	Oct 02 12:23:53 pause-668509 kubelet[1395]: I1002 12:23:53.177851    1395 scope.go:117] "RemoveContainer" containerID="4c6d76d4f88efe79aacaf3c5a2fd4b03815c33f7e45703ab532ad1644e4cc1c7"
	Oct 02 12:23:57 pause-668509 kubelet[1395]: E1002 12:23:57.594186    1395 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: unknown (get configmaps)
	Oct 02 12:23:58 pause-668509 kubelet[1395]: I1002 12:23:58.532327    1395 scope.go:117] "RemoveContainer" containerID="765f6029e04503733090301109f6ac9d0680f171a384f96f69f1869087493160"
	Oct 02 12:23:58 pause-668509 kubelet[1395]: E1002 12:23:58.533115    1395 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 10s restarting failed container=coredns pod=coredns-5dd5756b68-j7vsd_kube-system(6f2f859a-29d5-4aec-befb-42314c660c0a)\"" pod="kube-system/coredns-5dd5756b68-j7vsd" podUID="6f2f859a-29d5-4aec-befb-42314c660c0a"
	Oct 02 12:23:58 pause-668509 kubelet[1395]: I1002 12:23:58.540263    1395 scope.go:117] "RemoveContainer" containerID="edc29e5627cfe3eaa7d550461b0649db0a97205623edd25c9b3483d9aa1e5d53"
	Oct 02 12:23:58 pause-668509 kubelet[1395]: E1002 12:23:58.540848    1395 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 10s restarting failed container=coredns pod=coredns-5dd5756b68-zkmnf_kube-system(6ffeca67-4965-45fd-887d-779d8033e909)\"" pod="kube-system/coredns-5dd5756b68-zkmnf" podUID="6ffeca67-4965-45fd-887d-779d8033e909"
	Oct 02 12:24:10 pause-668509 kubelet[1395]: I1002 12:24:10.232686    1395 scope.go:117] "RemoveContainer" containerID="765f6029e04503733090301109f6ac9d0680f171a384f96f69f1869087493160"
	Oct 02 12:24:12 pause-668509 kubelet[1395]: I1002 12:24:12.231907    1395 scope.go:117] "RemoveContainer" containerID="edc29e5627cfe3eaa7d550461b0649db0a97205623edd25c9b3483d9aa1e5d53"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E1002 12:24:15.560565 2642994 logs.go:266] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/17340-2494243/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-668509 -n pause-668509
helpers_test.go:261: (dbg) Run:  kubectl --context pause-668509 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect pause-668509
helpers_test.go:235: (dbg) docker inspect pause-668509:

-- stdout --
	[
	    {
	        "Id": "7ccd2f593640ba482201afff59752f23da31cecb69e8a23a8a81636dab99d102",
	        "Created": "2023-10-02T12:22:18.975467541Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2636109,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-10-02T12:22:19.328175055Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:560a33002deec07a703a16e2b1dbf6aecde4c0d46aaefa1cb6df4c8c8a7774a7",
	        "ResolvConfPath": "/var/lib/docker/containers/7ccd2f593640ba482201afff59752f23da31cecb69e8a23a8a81636dab99d102/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7ccd2f593640ba482201afff59752f23da31cecb69e8a23a8a81636dab99d102/hostname",
	        "HostsPath": "/var/lib/docker/containers/7ccd2f593640ba482201afff59752f23da31cecb69e8a23a8a81636dab99d102/hosts",
	        "LogPath": "/var/lib/docker/containers/7ccd2f593640ba482201afff59752f23da31cecb69e8a23a8a81636dab99d102/7ccd2f593640ba482201afff59752f23da31cecb69e8a23a8a81636dab99d102-json.log",
	        "Name": "/pause-668509",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-668509:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-668509",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2147483648,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/7bb663e3ecacbc5ac6bf695da439a070ff82d338da730b70d281c9f7b027d2bc-init/diff:/var/lib/docker/overlay2/1ffc828a09df1e9fa25f5092ba7b162a0fa5a6fe031a41b1f614792625eb1522/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7bb663e3ecacbc5ac6bf695da439a070ff82d338da730b70d281c9f7b027d2bc/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7bb663e3ecacbc5ac6bf695da439a070ff82d338da730b70d281c9f7b027d2bc/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7bb663e3ecacbc5ac6bf695da439a070ff82d338da730b70d281c9f7b027d2bc/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-668509",
	                "Source": "/var/lib/docker/volumes/pause-668509/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-668509",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-668509",
	                "name.minikube.sigs.k8s.io": "pause-668509",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0d5309f72edcb189f7ec52dcf9140ab2acf843fbc7e3aa7bfed679ec5a479188",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36083"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36082"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36079"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36081"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36080"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/0d5309f72edc",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-668509": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "7ccd2f593640",
	                        "pause-668509"
	                    ],
	                    "NetworkID": "da1bf30239851d2e5aa2c8a99aa82d9a50d0f5350bb586b9bcc16cdfd0cead4b",
	                    "EndpointID": "a6c0451a6e5e992cc88f9a78bdfb26d7f2d4d758d9c812e2b2f8899752cb3b77",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:55:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-668509 -n pause-668509
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p pause-668509 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p pause-668509 logs -n 25: (2.16801482s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-409989 sudo crio            | cilium-409989             | jenkins | v1.31.2 | 02 Oct 23 12:15 UTC |                     |
	|         | config                                |                           |         |         |                     |                     |
	| delete  | -p cilium-409989                      | cilium-409989             | jenkins | v1.31.2 | 02 Oct 23 12:15 UTC | 02 Oct 23 12:15 UTC |
	| start   | -p force-systemd-env-193623           | force-systemd-env-193623  | jenkins | v1.31.2 | 02 Oct 23 12:15 UTC | 02 Oct 23 12:16 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=5 --driver=docker                  |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p missing-upgrade-402693             | missing-upgrade-402693    | jenkins | v1.31.2 | 02 Oct 23 12:15 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker                  |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-193623           | force-systemd-env-193623  | jenkins | v1.31.2 | 02 Oct 23 12:16 UTC | 02 Oct 23 12:16 UTC |
	| start   | -p force-systemd-flag-990972          | force-systemd-flag-990972 | jenkins | v1.31.2 | 02 Oct 23 12:16 UTC | 02 Oct 23 12:17 UTC |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=5 --driver=docker                  |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p missing-upgrade-402693             | missing-upgrade-402693    | jenkins | v1.31.2 | 02 Oct 23 12:16 UTC | 02 Oct 23 12:16 UTC |
	| start   | -p cert-expiration-752167             | cert-expiration-752167    | jenkins | v1.31.2 | 02 Oct 23 12:16 UTC | 02 Oct 23 12:17 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=docker                       |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-990972 ssh cat     | force-systemd-flag-990972 | jenkins | v1.31.2 | 02 Oct 23 12:17 UTC | 02 Oct 23 12:17 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf    |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-990972          | force-systemd-flag-990972 | jenkins | v1.31.2 | 02 Oct 23 12:17 UTC | 02 Oct 23 12:17 UTC |
	| start   | -p cert-options-926506                | cert-options-926506       | jenkins | v1.31.2 | 02 Oct 23 12:17 UTC | 02 Oct 23 12:17 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=docker                       |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | cert-options-926506 ssh               | cert-options-926506       | jenkins | v1.31.2 | 02 Oct 23 12:17 UTC | 02 Oct 23 12:17 UTC |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-926506 -- sudo        | cert-options-926506       | jenkins | v1.31.2 | 02 Oct 23 12:17 UTC | 02 Oct 23 12:17 UTC |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-926506                | cert-options-926506       | jenkins | v1.31.2 | 02 Oct 23 12:17 UTC | 02 Oct 23 12:17 UTC |
	| start   | -p running-upgrade-763919             | running-upgrade-763919    | jenkins | v1.31.2 | 02 Oct 23 12:18 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker                  |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-763919             | running-upgrade-763919    | jenkins | v1.31.2 | 02 Oct 23 12:19 UTC | 02 Oct 23 12:19 UTC |
	| start   | -p kubernetes-upgrade-832241          | kubernetes-upgrade-832241 | jenkins | v1.31.2 | 02 Oct 23 12:19 UTC | 02 Oct 23 12:20 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker                  |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-832241          | kubernetes-upgrade-832241 | jenkins | v1.31.2 | 02 Oct 23 12:20 UTC | 02 Oct 23 12:20 UTC |
	| start   | -p kubernetes-upgrade-832241          | kubernetes-upgrade-832241 | jenkins | v1.31.2 | 02 Oct 23 12:20 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker                  |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p cert-expiration-752167             | cert-expiration-752167    | jenkins | v1.31.2 | 02 Oct 23 12:20 UTC | 02 Oct 23 12:20 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=docker                       |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-752167             | cert-expiration-752167    | jenkins | v1.31.2 | 02 Oct 23 12:20 UTC | 02 Oct 23 12:20 UTC |
	| start   | -p stopped-upgrade-998345             | stopped-upgrade-998345    | jenkins | v1.31.2 | 02 Oct 23 12:22 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker                  |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-998345             | stopped-upgrade-998345    | jenkins | v1.31.2 | 02 Oct 23 12:22 UTC | 02 Oct 23 12:22 UTC |
	| start   | -p pause-668509 --memory=2048         | pause-668509              | jenkins | v1.31.2 | 02 Oct 23 12:22 UTC | 02 Oct 23 12:23 UTC |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=all --driver=docker            |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p pause-668509                       | pause-668509              | jenkins | v1.31.2 | 02 Oct 23 12:23 UTC | 02 Oct 23 12:24 UTC |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker                  |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/02 12:23:32
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.21.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 12:23:32.180898 2639977 out.go:296] Setting OutFile to fd 1 ...
	I1002 12:23:32.181099 2639977 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 12:23:32.181110 2639977 out.go:309] Setting ErrFile to fd 2...
	I1002 12:23:32.181116 2639977 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 12:23:32.181420 2639977 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17340-2494243/.minikube/bin
	I1002 12:23:32.181809 2639977 out.go:303] Setting JSON to false
	I1002 12:23:32.182933 2639977 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":72358,"bootTime":1696177054,"procs":260,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1045-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1002 12:23:32.183010 2639977 start.go:138] virtualization:  
	I1002 12:23:32.185610 2639977 out.go:177] * [pause-668509] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I1002 12:23:32.187806 2639977 out.go:177]   - MINIKUBE_LOCATION=17340
	I1002 12:23:32.187981 2639977 notify.go:220] Checking for updates...
	I1002 12:23:32.191537 2639977 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 12:23:32.193419 2639977 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17340-2494243/kubeconfig
	I1002 12:23:32.195402 2639977 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17340-2494243/.minikube
	I1002 12:23:32.197413 2639977 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 12:23:32.199344 2639977 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 12:23:32.201692 2639977 config.go:182] Loaded profile config "pause-668509": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1002 12:23:32.202419 2639977 driver.go:373] Setting default libvirt URI to qemu:///system
	I1002 12:23:32.228165 2639977 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I1002 12:23:32.228272 2639977 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 12:23:32.322273 2639977 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:49 OomKillDisable:true NGoroutines:65 SystemTime:2023-10-02 12:23:32.309998554 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1045-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1002 12:23:32.322378 2639977 docker.go:294] overlay module found
	I1002 12:23:32.324585 2639977 out.go:177] * Using the docker driver based on existing profile
	I1002 12:23:32.326260 2639977 start.go:298] selected driver: docker
	I1002 12:23:32.326279 2639977 start.go:902] validating driver "docker" against &{Name:pause-668509 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:pause-668509 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 12:23:32.326457 2639977 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 12:23:32.326572 2639977 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 12:23:32.396630 2639977 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:49 OomKillDisable:true NGoroutines:65 SystemTime:2023-10-02 12:23:32.386624913 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1045-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1002 12:23:32.397102 2639977 cni.go:84] Creating CNI manager for ""
	I1002 12:23:32.397121 2639977 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 12:23:32.397135 2639977 start_flags.go:321] config:
	{Name:pause-668509 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:pause-668509 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 12:23:32.400667 2639977 out.go:177] * Starting control plane node pause-668509 in cluster pause-668509
	I1002 12:23:32.402595 2639977 cache.go:122] Beginning downloading kic base image for docker with crio
	I1002 12:23:32.404192 2639977 out.go:177] * Pulling base image ...
	I1002 12:23:32.405948 2639977 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1002 12:23:32.406008 2639977 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17340-2494243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-arm64.tar.lz4
	I1002 12:23:32.406021 2639977 cache.go:57] Caching tarball of preloaded images
	I1002 12:23:32.406062 2639977 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 in local docker daemon
	I1002 12:23:32.406119 2639977 preload.go:174] Found /home/jenkins/minikube-integration/17340-2494243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1002 12:23:32.406130 2639977 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on crio
	I1002 12:23:32.406255 2639977 profile.go:148] Saving config to /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/pause-668509/config.json ...
	I1002 12:23:32.425684 2639977 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 in local docker daemon, skipping pull
	I1002 12:23:32.425711 2639977 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 exists in daemon, skipping load
	I1002 12:23:32.425732 2639977 cache.go:195] Successfully downloaded all kic artifacts
	I1002 12:23:32.425774 2639977 start.go:365] acquiring machines lock for pause-668509: {Name:mka7b1d7db88c46f55df5c1454a55c5ef9dda60d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 12:23:32.425851 2639977 start.go:369] acquired machines lock for "pause-668509" in 53.022µs
	I1002 12:23:32.425875 2639977 start.go:96] Skipping create...Using existing machine configuration
	I1002 12:23:32.425884 2639977 fix.go:54] fixHost starting: 
	I1002 12:23:32.426171 2639977 cli_runner.go:164] Run: docker container inspect pause-668509 --format={{.State.Status}}
	I1002 12:23:32.450877 2639977 fix.go:102] recreateIfNeeded on pause-668509: state=Running err=<nil>
	W1002 12:23:32.450923 2639977 fix.go:128] unexpected machine state, will restart: <nil>
	I1002 12:23:32.453218 2639977 out.go:177] * Updating the running docker "pause-668509" container ...
	I1002 12:23:32.455334 2639977 machine.go:88] provisioning docker machine ...
	I1002 12:23:32.455363 2639977 ubuntu.go:169] provisioning hostname "pause-668509"
	I1002 12:23:32.455436 2639977 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-668509
	I1002 12:23:32.478449 2639977 main.go:141] libmachine: Using SSH client type: native
	I1002 12:23:32.478889 2639977 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ac3c0] 0x3aeb30 <nil>  [] 0s} 127.0.0.1 36083 <nil> <nil>}
	I1002 12:23:32.478953 2639977 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-668509 && echo "pause-668509" | sudo tee /etc/hostname
	I1002 12:23:32.641530 2639977 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-668509
	
	I1002 12:23:32.641616 2639977 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-668509
	I1002 12:23:32.665677 2639977 main.go:141] libmachine: Using SSH client type: native
	I1002 12:23:32.666204 2639977 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ac3c0] 0x3aeb30 <nil>  [] 0s} 127.0.0.1 36083 <nil> <nil>}
	I1002 12:23:32.666229 2639977 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-668509' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-668509/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-668509' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 12:23:32.810431 2639977 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 12:23:32.810504 2639977 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17340-2494243/.minikube CaCertPath:/home/jenkins/minikube-integration/17340-2494243/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17340-2494243/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17340-2494243/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17340-2494243/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17340-2494243/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17340-2494243/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17340-2494243/.minikube}
	I1002 12:23:32.810537 2639977 ubuntu.go:177] setting up certificates
	I1002 12:23:32.810548 2639977 provision.go:83] configureAuth start
	I1002 12:23:32.810611 2639977 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-668509
	I1002 12:23:32.829905 2639977 provision.go:138] copyHostCerts
	I1002 12:23:32.829973 2639977 exec_runner.go:144] found /home/jenkins/minikube-integration/17340-2494243/.minikube/ca.pem, removing ...
	I1002 12:23:32.829999 2639977 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17340-2494243/.minikube/ca.pem
	I1002 12:23:32.830080 2639977 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-2494243/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17340-2494243/.minikube/ca.pem (1082 bytes)
	I1002 12:23:32.830186 2639977 exec_runner.go:144] found /home/jenkins/minikube-integration/17340-2494243/.minikube/cert.pem, removing ...
	I1002 12:23:32.830196 2639977 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17340-2494243/.minikube/cert.pem
	I1002 12:23:32.830230 2639977 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-2494243/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17340-2494243/.minikube/cert.pem (1123 bytes)
	I1002 12:23:32.830289 2639977 exec_runner.go:144] found /home/jenkins/minikube-integration/17340-2494243/.minikube/key.pem, removing ...
	I1002 12:23:32.830298 2639977 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17340-2494243/.minikube/key.pem
	I1002 12:23:32.830324 2639977 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-2494243/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17340-2494243/.minikube/key.pem (1675 bytes)
	I1002 12:23:32.830373 2639977 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17340-2494243/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17340-2494243/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17340-2494243/.minikube/certs/ca-key.pem org=jenkins.pause-668509 san=[192.168.85.2 127.0.0.1 localhost 127.0.0.1 minikube pause-668509]
	I1002 12:23:33.230338 2639977 provision.go:172] copyRemoteCerts
	I1002 12:23:33.230411 2639977 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 12:23:33.230460 2639977 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-668509
	I1002 12:23:33.251210 2639977 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36083 SSHKeyPath:/home/jenkins/minikube-integration/17340-2494243/.minikube/machines/pause-668509/id_rsa Username:docker}
	I1002 12:23:33.357593 2639977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2494243/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 12:23:33.397868 2639977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2494243/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I1002 12:23:33.435255 2639977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2494243/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 12:23:33.473226 2639977 provision.go:86] duration metric: configureAuth took 662.649093ms
	I1002 12:23:33.473252 2639977 ubuntu.go:193] setting minikube options for container-runtime
	I1002 12:23:33.473481 2639977 config.go:182] Loaded profile config "pause-668509": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1002 12:23:33.473590 2639977 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-668509
	I1002 12:23:33.495137 2639977 main.go:141] libmachine: Using SSH client type: native
	I1002 12:23:33.495661 2639977 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ac3c0] 0x3aeb30 <nil>  [] 0s} 127.0.0.1 36083 <nil> <nil>}
	I1002 12:23:33.495682 2639977 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 12:23:32.963956 2626896 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.086955347s)
	W1002 12:23:32.963994 2626896 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I1002 12:23:32.964001 2626896 logs.go:123] Gathering logs for kube-apiserver [e7547ea18621a8e51949f5d23db77e7dda7b034958222dd8693dc7a53d24ac6b] ...
	I1002 12:23:32.964011 2626896 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7547ea18621a8e51949f5d23db77e7dda7b034958222dd8693dc7a53d24ac6b"
	I1002 12:23:33.072215 2626896 logs.go:123] Gathering logs for kube-scheduler [190a214fbd04dbd14bb0bf18b3701f5316e2c1e886ecc040fda2694810fc1575] ...
	I1002 12:23:33.072297 2626896 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 190a214fbd04dbd14bb0bf18b3701f5316e2c1e886ecc040fda2694810fc1575"
	I1002 12:23:33.198306 2626896 logs.go:123] Gathering logs for CRI-O ...
	I1002 12:23:33.198383 2626896 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 12:23:33.259232 2626896 logs.go:123] Gathering logs for kube-apiserver [b29d9a1de92bd8df7c6dad49de7fc6afa0264014c0a9b14ad20e14dd6528e6d3] ...
	I1002 12:23:33.259309 2626896 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b29d9a1de92bd8df7c6dad49de7fc6afa0264014c0a9b14ad20e14dd6528e6d3"
	I1002 12:23:33.316871 2626896 logs.go:123] Gathering logs for kube-controller-manager [f1d47bfdbf0492d44243fbb956f011a3a3cb89629cb38698a0517972d7b2bcf9] ...
	I1002 12:23:33.316951 2626896 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f1d47bfdbf0492d44243fbb956f011a3a3cb89629cb38698a0517972d7b2bcf9"
	I1002 12:23:33.382907 2626896 logs.go:123] Gathering logs for container status ...
	I1002 12:23:33.382940 2626896 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 12:23:35.961556 2626896 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1002 12:23:39.010337 2639977 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 12:23:39.010363 2639977 machine.go:91] provisioned docker machine in 6.555011052s
	I1002 12:23:39.010374 2639977 start.go:300] post-start starting for "pause-668509" (driver="docker")
	I1002 12:23:39.010385 2639977 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 12:23:39.010452 2639977 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 12:23:39.010503 2639977 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-668509
	I1002 12:23:39.035391 2639977 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36083 SSHKeyPath:/home/jenkins/minikube-integration/17340-2494243/.minikube/machines/pause-668509/id_rsa Username:docker}
	I1002 12:23:39.140185 2639977 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 12:23:39.144688 2639977 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 12:23:39.144735 2639977 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1002 12:23:39.144748 2639977 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1002 12:23:39.144757 2639977 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1002 12:23:39.144771 2639977 filesync.go:126] Scanning /home/jenkins/minikube-integration/17340-2494243/.minikube/addons for local assets ...
	I1002 12:23:39.144834 2639977 filesync.go:126] Scanning /home/jenkins/minikube-integration/17340-2494243/.minikube/files for local assets ...
	I1002 12:23:39.144935 2639977 filesync.go:149] local asset: /home/jenkins/minikube-integration/17340-2494243/.minikube/files/etc/ssl/certs/24995982.pem -> 24995982.pem in /etc/ssl/certs
	I1002 12:23:39.145074 2639977 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 12:23:39.156323 2639977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2494243/.minikube/files/etc/ssl/certs/24995982.pem --> /etc/ssl/certs/24995982.pem (1708 bytes)
	I1002 12:23:39.186179 2639977 start.go:303] post-start completed in 175.789003ms
	I1002 12:23:39.186309 2639977 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 12:23:39.186365 2639977 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-668509
	I1002 12:23:39.204618 2639977 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36083 SSHKeyPath:/home/jenkins/minikube-integration/17340-2494243/.minikube/machines/pause-668509/id_rsa Username:docker}
	I1002 12:23:39.299099 2639977 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 12:23:39.305133 2639977 fix.go:56] fixHost completed within 6.879239839s
	I1002 12:23:39.305159 2639977 start.go:83] releasing machines lock for "pause-668509", held for 6.879296479s
	I1002 12:23:39.305230 2639977 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-668509
	I1002 12:23:39.322958 2639977 ssh_runner.go:195] Run: cat /version.json
	I1002 12:23:39.323032 2639977 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-668509
	I1002 12:23:39.322960 2639977 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 12:23:39.323147 2639977 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-668509
	I1002 12:23:39.346323 2639977 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36083 SSHKeyPath:/home/jenkins/minikube-integration/17340-2494243/.minikube/machines/pause-668509/id_rsa Username:docker}
	I1002 12:23:39.359856 2639977 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36083 SSHKeyPath:/home/jenkins/minikube-integration/17340-2494243/.minikube/machines/pause-668509/id_rsa Username:docker}
	I1002 12:23:39.441548 2639977 ssh_runner.go:195] Run: systemctl --version
	I1002 12:23:39.904546 2639977 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 12:23:40.103004 2639977 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1002 12:23:40.120589 2639977 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 12:23:40.144505 2639977 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1002 12:23:40.144676 2639977 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 12:23:40.171857 2639977 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1002 12:23:40.171883 2639977 start.go:469] detecting cgroup driver to use...
	I1002 12:23:40.171916 2639977 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1002 12:23:40.171984 2639977 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 12:23:40.199540 2639977 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 12:23:40.221367 2639977 docker.go:197] disabling cri-docker service (if available) ...
	I1002 12:23:40.221439 2639977 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 12:23:40.252086 2639977 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 12:23:40.276325 2639977 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 12:23:40.525514 2639977 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 12:23:40.701768 2639977 docker.go:213] disabling docker service ...
	I1002 12:23:40.701888 2639977 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 12:23:40.732331 2639977 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 12:23:40.766157 2639977 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 12:23:40.975682 2639977 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 12:23:41.184573 2639977 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 12:23:41.218834 2639977 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 12:23:41.279530 2639977 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1002 12:23:41.279598 2639977 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 12:23:41.311535 2639977 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 12:23:41.311607 2639977 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 12:23:41.347074 2639977 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 12:23:41.381854 2639977 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 12:23:41.413617 2639977 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 12:23:41.438844 2639977 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 12:23:41.463431 2639977 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 12:23:41.491059 2639977 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 12:23:41.776070 2639977 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 12:23:37.718308 2626896 api_server.go:269] stopped: https://192.168.67.2:8443/healthz: Get "https://192.168.67.2:8443/healthz": read tcp 192.168.67.1:56632->192.168.67.2:8443: read: connection reset by peer
	I1002 12:23:37.718365 2626896 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 12:23:37.718428 2626896 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 12:23:37.775282 2626896 cri.go:89] found id: "e7547ea18621a8e51949f5d23db77e7dda7b034958222dd8693dc7a53d24ac6b"
	I1002 12:23:37.775302 2626896 cri.go:89] found id: "b29d9a1de92bd8df7c6dad49de7fc6afa0264014c0a9b14ad20e14dd6528e6d3"
	I1002 12:23:37.775308 2626896 cri.go:89] found id: ""
	I1002 12:23:37.775318 2626896 logs.go:284] 2 containers: [e7547ea18621a8e51949f5d23db77e7dda7b034958222dd8693dc7a53d24ac6b b29d9a1de92bd8df7c6dad49de7fc6afa0264014c0a9b14ad20e14dd6528e6d3]
	I1002 12:23:37.775375 2626896 ssh_runner.go:195] Run: which crictl
	I1002 12:23:37.779859 2626896 ssh_runner.go:195] Run: which crictl
	I1002 12:23:37.784378 2626896 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 12:23:37.784445 2626896 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 12:23:37.825670 2626896 cri.go:89] found id: ""
	I1002 12:23:37.825692 2626896 logs.go:284] 0 containers: []
	W1002 12:23:37.825701 2626896 logs.go:286] No container was found matching "etcd"
	I1002 12:23:37.825707 2626896 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 12:23:37.825767 2626896 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 12:23:37.867919 2626896 cri.go:89] found id: ""
	I1002 12:23:37.867946 2626896 logs.go:284] 0 containers: []
	W1002 12:23:37.867955 2626896 logs.go:286] No container was found matching "coredns"
	I1002 12:23:37.867962 2626896 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 12:23:37.868019 2626896 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 12:23:37.910129 2626896 cri.go:89] found id: "190a214fbd04dbd14bb0bf18b3701f5316e2c1e886ecc040fda2694810fc1575"
	I1002 12:23:37.910150 2626896 cri.go:89] found id: ""
	I1002 12:23:37.910158 2626896 logs.go:284] 1 containers: [190a214fbd04dbd14bb0bf18b3701f5316e2c1e886ecc040fda2694810fc1575]
	I1002 12:23:37.910215 2626896 ssh_runner.go:195] Run: which crictl
	I1002 12:23:37.914596 2626896 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 12:23:37.914669 2626896 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 12:23:37.956204 2626896 cri.go:89] found id: ""
	I1002 12:23:37.956225 2626896 logs.go:284] 0 containers: []
	W1002 12:23:37.956233 2626896 logs.go:286] No container was found matching "kube-proxy"
	I1002 12:23:37.956240 2626896 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 12:23:37.956298 2626896 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 12:23:37.999136 2626896 cri.go:89] found id: "e566561f7200b197c4f03c4a322fb4d8f5f0028b5e30e1a441f73d48e12301fd"
	I1002 12:23:37.999156 2626896 cri.go:89] found id: "f1d47bfdbf0492d44243fbb956f011a3a3cb89629cb38698a0517972d7b2bcf9"
	I1002 12:23:37.999162 2626896 cri.go:89] found id: ""
	I1002 12:23:37.999169 2626896 logs.go:284] 2 containers: [e566561f7200b197c4f03c4a322fb4d8f5f0028b5e30e1a441f73d48e12301fd f1d47bfdbf0492d44243fbb956f011a3a3cb89629cb38698a0517972d7b2bcf9]
	I1002 12:23:37.999228 2626896 ssh_runner.go:195] Run: which crictl
	I1002 12:23:38.007613 2626896 ssh_runner.go:195] Run: which crictl
	I1002 12:23:38.013024 2626896 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 12:23:38.013112 2626896 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 12:23:38.065684 2626896 cri.go:89] found id: ""
	I1002 12:23:38.065706 2626896 logs.go:284] 0 containers: []
	W1002 12:23:38.065720 2626896 logs.go:286] No container was found matching "kindnet"
	I1002 12:23:38.065727 2626896 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1002 12:23:38.065790 2626896 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 12:23:38.108788 2626896 cri.go:89] found id: ""
	I1002 12:23:38.108809 2626896 logs.go:284] 0 containers: []
	W1002 12:23:38.108817 2626896 logs.go:286] No container was found matching "storage-provisioner"
	I1002 12:23:38.108830 2626896 logs.go:123] Gathering logs for kube-apiserver [b29d9a1de92bd8df7c6dad49de7fc6afa0264014c0a9b14ad20e14dd6528e6d3] ...
	I1002 12:23:38.108843 2626896 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b29d9a1de92bd8df7c6dad49de7fc6afa0264014c0a9b14ad20e14dd6528e6d3"
	I1002 12:23:38.164272 2626896 logs.go:123] Gathering logs for kube-controller-manager [e566561f7200b197c4f03c4a322fb4d8f5f0028b5e30e1a441f73d48e12301fd] ...
	I1002 12:23:38.164300 2626896 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e566561f7200b197c4f03c4a322fb4d8f5f0028b5e30e1a441f73d48e12301fd"
	I1002 12:23:38.209116 2626896 logs.go:123] Gathering logs for CRI-O ...
	I1002 12:23:38.209146 2626896 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 12:23:38.277431 2626896 logs.go:123] Gathering logs for describe nodes ...
	I1002 12:23:38.277465 2626896 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 12:23:38.354210 2626896 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 12:23:38.354233 2626896 logs.go:123] Gathering logs for dmesg ...
	I1002 12:23:38.354245 2626896 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 12:23:38.378250 2626896 logs.go:123] Gathering logs for kube-apiserver [e7547ea18621a8e51949f5d23db77e7dda7b034958222dd8693dc7a53d24ac6b] ...
	I1002 12:23:38.378285 2626896 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7547ea18621a8e51949f5d23db77e7dda7b034958222dd8693dc7a53d24ac6b"
	I1002 12:23:38.426841 2626896 logs.go:123] Gathering logs for kube-scheduler [190a214fbd04dbd14bb0bf18b3701f5316e2c1e886ecc040fda2694810fc1575] ...
	I1002 12:23:38.426920 2626896 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 190a214fbd04dbd14bb0bf18b3701f5316e2c1e886ecc040fda2694810fc1575"
	I1002 12:23:38.530967 2626896 logs.go:123] Gathering logs for kube-controller-manager [f1d47bfdbf0492d44243fbb956f011a3a3cb89629cb38698a0517972d7b2bcf9] ...
	I1002 12:23:38.531007 2626896 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f1d47bfdbf0492d44243fbb956f011a3a3cb89629cb38698a0517972d7b2bcf9"
	I1002 12:23:38.575545 2626896 logs.go:123] Gathering logs for container status ...
	I1002 12:23:38.575575 2626896 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 12:23:38.624026 2626896 logs.go:123] Gathering logs for kubelet ...
	I1002 12:23:38.624054 2626896 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 12:23:41.248476 2626896 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1002 12:23:41.248879 2626896 api_server.go:269] stopped: https://192.168.67.2:8443/healthz: Get "https://192.168.67.2:8443/healthz": dial tcp 192.168.67.2:8443: connect: connection refused
	I1002 12:23:41.248937 2626896 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 12:23:41.248995 2626896 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 12:23:41.325811 2626896 cri.go:89] found id: "e7547ea18621a8e51949f5d23db77e7dda7b034958222dd8693dc7a53d24ac6b"
	I1002 12:23:41.325830 2626896 cri.go:89] found id: ""
	I1002 12:23:41.325837 2626896 logs.go:284] 1 containers: [e7547ea18621a8e51949f5d23db77e7dda7b034958222dd8693dc7a53d24ac6b]
	I1002 12:23:41.325894 2626896 ssh_runner.go:195] Run: which crictl
	I1002 12:23:41.330389 2626896 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 12:23:41.330457 2626896 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 12:23:41.410662 2626896 cri.go:89] found id: ""
	I1002 12:23:41.410683 2626896 logs.go:284] 0 containers: []
	W1002 12:23:41.410692 2626896 logs.go:286] No container was found matching "etcd"
	I1002 12:23:41.410698 2626896 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 12:23:41.410759 2626896 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 12:23:41.487289 2626896 cri.go:89] found id: ""
	I1002 12:23:41.487309 2626896 logs.go:284] 0 containers: []
	W1002 12:23:41.487318 2626896 logs.go:286] No container was found matching "coredns"
	I1002 12:23:41.487324 2626896 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 12:23:41.487386 2626896 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 12:23:41.563843 2626896 cri.go:89] found id: "190a214fbd04dbd14bb0bf18b3701f5316e2c1e886ecc040fda2694810fc1575"
	I1002 12:23:41.563864 2626896 cri.go:89] found id: ""
	I1002 12:23:41.563872 2626896 logs.go:284] 1 containers: [190a214fbd04dbd14bb0bf18b3701f5316e2c1e886ecc040fda2694810fc1575]
	I1002 12:23:41.563928 2626896 ssh_runner.go:195] Run: which crictl
	I1002 12:23:41.569448 2626896 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 12:23:41.569518 2626896 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 12:23:41.672472 2626896 cri.go:89] found id: ""
	I1002 12:23:41.672494 2626896 logs.go:284] 0 containers: []
	W1002 12:23:41.672504 2626896 logs.go:286] No container was found matching "kube-proxy"
	I1002 12:23:41.672511 2626896 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 12:23:41.672586 2626896 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 12:23:41.732165 2626896 cri.go:89] found id: "e566561f7200b197c4f03c4a322fb4d8f5f0028b5e30e1a441f73d48e12301fd"
	I1002 12:23:41.732184 2626896 cri.go:89] found id: "f1d47bfdbf0492d44243fbb956f011a3a3cb89629cb38698a0517972d7b2bcf9"
	I1002 12:23:41.732190 2626896 cri.go:89] found id: ""
	I1002 12:23:41.732197 2626896 logs.go:284] 2 containers: [e566561f7200b197c4f03c4a322fb4d8f5f0028b5e30e1a441f73d48e12301fd f1d47bfdbf0492d44243fbb956f011a3a3cb89629cb38698a0517972d7b2bcf9]
	I1002 12:23:41.732254 2626896 ssh_runner.go:195] Run: which crictl
	I1002 12:23:41.738052 2626896 ssh_runner.go:195] Run: which crictl
	I1002 12:23:41.743452 2626896 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 12:23:41.743599 2626896 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 12:23:41.809818 2626896 cri.go:89] found id: ""
	I1002 12:23:41.809845 2626896 logs.go:284] 0 containers: []
	W1002 12:23:41.809854 2626896 logs.go:286] No container was found matching "kindnet"
	I1002 12:23:41.809862 2626896 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1002 12:23:41.809924 2626896 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 12:23:41.876577 2626896 cri.go:89] found id: ""
	I1002 12:23:41.876603 2626896 logs.go:284] 0 containers: []
	W1002 12:23:41.876612 2626896 logs.go:286] No container was found matching "storage-provisioner"
	I1002 12:23:41.876633 2626896 logs.go:123] Gathering logs for kube-apiserver [e7547ea18621a8e51949f5d23db77e7dda7b034958222dd8693dc7a53d24ac6b] ...
	I1002 12:23:41.876653 2626896 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7547ea18621a8e51949f5d23db77e7dda7b034958222dd8693dc7a53d24ac6b"
	I1002 12:23:41.933469 2626896 logs.go:123] Gathering logs for kube-scheduler [190a214fbd04dbd14bb0bf18b3701f5316e2c1e886ecc040fda2694810fc1575] ...
	I1002 12:23:41.933497 2626896 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 190a214fbd04dbd14bb0bf18b3701f5316e2c1e886ecc040fda2694810fc1575"
	I1002 12:23:42.041273 2626896 logs.go:123] Gathering logs for kube-controller-manager [e566561f7200b197c4f03c4a322fb4d8f5f0028b5e30e1a441f73d48e12301fd] ...
	I1002 12:23:42.041308 2626896 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e566561f7200b197c4f03c4a322fb4d8f5f0028b5e30e1a441f73d48e12301fd"
	I1002 12:23:42.093521 2626896 logs.go:123] Gathering logs for CRI-O ...
	I1002 12:23:42.093554 2626896 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 12:23:42.147751 2626896 logs.go:123] Gathering logs for kubelet ...
	I1002 12:23:42.147793 2626896 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 12:23:42.278113 2626896 logs.go:123] Gathering logs for dmesg ...
	I1002 12:23:42.278153 2626896 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 12:23:42.302669 2626896 logs.go:123] Gathering logs for describe nodes ...
	I1002 12:23:42.302705 2626896 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 12:23:42.383149 2626896 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 12:23:42.383173 2626896 logs.go:123] Gathering logs for kube-controller-manager [f1d47bfdbf0492d44243fbb956f011a3a3cb89629cb38698a0517972d7b2bcf9] ...
	I1002 12:23:42.383187 2626896 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f1d47bfdbf0492d44243fbb956f011a3a3cb89629cb38698a0517972d7b2bcf9"
	I1002 12:23:42.429161 2626896 logs.go:123] Gathering logs for container status ...
	I1002 12:23:42.429189 2626896 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 12:23:44.978059 2626896 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1002 12:23:44.978521 2626896 api_server.go:269] stopped: https://192.168.67.2:8443/healthz: Get "https://192.168.67.2:8443/healthz": dial tcp 192.168.67.2:8443: connect: connection refused
	I1002 12:23:44.978565 2626896 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 12:23:44.978626 2626896 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 12:23:45.057772 2626896 cri.go:89] found id: "e7547ea18621a8e51949f5d23db77e7dda7b034958222dd8693dc7a53d24ac6b"
	I1002 12:23:45.057801 2626896 cri.go:89] found id: ""
	I1002 12:23:45.057810 2626896 logs.go:284] 1 containers: [e7547ea18621a8e51949f5d23db77e7dda7b034958222dd8693dc7a53d24ac6b]
	I1002 12:23:45.057923 2626896 ssh_runner.go:195] Run: which crictl
	I1002 12:23:45.079326 2626896 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 12:23:45.079408 2626896 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 12:23:45.142542 2626896 cri.go:89] found id: ""
	I1002 12:23:45.142573 2626896 logs.go:284] 0 containers: []
	W1002 12:23:45.142583 2626896 logs.go:286] No container was found matching "etcd"
	I1002 12:23:45.142590 2626896 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 12:23:45.142659 2626896 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 12:23:45.204129 2626896 cri.go:89] found id: ""
	I1002 12:23:45.204156 2626896 logs.go:284] 0 containers: []
	W1002 12:23:45.204166 2626896 logs.go:286] No container was found matching "coredns"
	I1002 12:23:45.204173 2626896 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 12:23:45.204244 2626896 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 12:23:45.270931 2626896 cri.go:89] found id: "190a214fbd04dbd14bb0bf18b3701f5316e2c1e886ecc040fda2694810fc1575"
	I1002 12:23:45.271006 2626896 cri.go:89] found id: ""
	I1002 12:23:45.271030 2626896 logs.go:284] 1 containers: [190a214fbd04dbd14bb0bf18b3701f5316e2c1e886ecc040fda2694810fc1575]
	I1002 12:23:45.271132 2626896 ssh_runner.go:195] Run: which crictl
	I1002 12:23:45.277074 2626896 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 12:23:45.277170 2626896 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 12:23:45.332433 2626896 cri.go:89] found id: ""
	I1002 12:23:45.332469 2626896 logs.go:284] 0 containers: []
	W1002 12:23:45.332479 2626896 logs.go:286] No container was found matching "kube-proxy"
	I1002 12:23:45.332487 2626896 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 12:23:45.332593 2626896 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 12:23:45.394838 2626896 cri.go:89] found id: "e566561f7200b197c4f03c4a322fb4d8f5f0028b5e30e1a441f73d48e12301fd"
	I1002 12:23:45.394915 2626896 cri.go:89] found id: "f1d47bfdbf0492d44243fbb956f011a3a3cb89629cb38698a0517972d7b2bcf9"
	I1002 12:23:45.394930 2626896 cri.go:89] found id: ""
	I1002 12:23:45.394939 2626896 logs.go:284] 2 containers: [e566561f7200b197c4f03c4a322fb4d8f5f0028b5e30e1a441f73d48e12301fd f1d47bfdbf0492d44243fbb956f011a3a3cb89629cb38698a0517972d7b2bcf9]
	I1002 12:23:45.395008 2626896 ssh_runner.go:195] Run: which crictl
	I1002 12:23:45.400716 2626896 ssh_runner.go:195] Run: which crictl
	I1002 12:23:45.405797 2626896 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 12:23:45.405896 2626896 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 12:23:45.454446 2626896 cri.go:89] found id: ""
	I1002 12:23:45.454519 2626896 logs.go:284] 0 containers: []
	W1002 12:23:45.454541 2626896 logs.go:286] No container was found matching "kindnet"
	I1002 12:23:45.454565 2626896 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1002 12:23:45.454656 2626896 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 12:23:45.503298 2626896 cri.go:89] found id: ""
	I1002 12:23:45.503374 2626896 logs.go:284] 0 containers: []
	W1002 12:23:45.503397 2626896 logs.go:286] No container was found matching "storage-provisioner"
	I1002 12:23:45.503418 2626896 logs.go:123] Gathering logs for kubelet ...
	I1002 12:23:45.503443 2626896 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 12:23:45.624512 2626896 logs.go:123] Gathering logs for describe nodes ...
	I1002 12:23:45.624555 2626896 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 12:23:45.709482 2626896 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 12:23:45.709500 2626896 logs.go:123] Gathering logs for kube-apiserver [e7547ea18621a8e51949f5d23db77e7dda7b034958222dd8693dc7a53d24ac6b] ...
	I1002 12:23:45.709512 2626896 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7547ea18621a8e51949f5d23db77e7dda7b034958222dd8693dc7a53d24ac6b"
	I1002 12:23:45.784702 2626896 logs.go:123] Gathering logs for CRI-O ...
	I1002 12:23:45.784729 2626896 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 12:23:45.850345 2626896 logs.go:123] Gathering logs for dmesg ...
	I1002 12:23:45.850387 2626896 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 12:23:45.877536 2626896 logs.go:123] Gathering logs for kube-scheduler [190a214fbd04dbd14bb0bf18b3701f5316e2c1e886ecc040fda2694810fc1575] ...
	I1002 12:23:45.877572 2626896 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 190a214fbd04dbd14bb0bf18b3701f5316e2c1e886ecc040fda2694810fc1575"
	I1002 12:23:45.979763 2626896 logs.go:123] Gathering logs for kube-controller-manager [e566561f7200b197c4f03c4a322fb4d8f5f0028b5e30e1a441f73d48e12301fd] ...
	I1002 12:23:45.979801 2626896 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e566561f7200b197c4f03c4a322fb4d8f5f0028b5e30e1a441f73d48e12301fd"
	I1002 12:23:46.027308 2626896 logs.go:123] Gathering logs for kube-controller-manager [f1d47bfdbf0492d44243fbb956f011a3a3cb89629cb38698a0517972d7b2bcf9] ...
	I1002 12:23:46.027340 2626896 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f1d47bfdbf0492d44243fbb956f011a3a3cb89629cb38698a0517972d7b2bcf9"
	I1002 12:23:46.075989 2626896 logs.go:123] Gathering logs for container status ...
	I1002 12:23:46.076016 2626896 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 12:23:50.774018 2639977 ssh_runner.go:235] Completed: sudo systemctl restart crio: (8.997837274s)
	I1002 12:23:50.774046 2639977 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 12:23:50.774100 2639977 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 12:23:50.778940 2639977 start.go:537] Will wait 60s for crictl version
	I1002 12:23:50.779006 2639977 ssh_runner.go:195] Run: which crictl
	I1002 12:23:50.783491 2639977 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1002 12:23:50.826301 2639977 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1002 12:23:50.826409 2639977 ssh_runner.go:195] Run: crio --version
	I1002 12:23:50.875711 2639977 ssh_runner.go:195] Run: crio --version
	I1002 12:23:50.926254 2639977 out.go:177] * Preparing Kubernetes v1.28.2 on CRI-O 1.24.6 ...
	I1002 12:23:50.928370 2639977 cli_runner.go:164] Run: docker network inspect pause-668509 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 12:23:50.946230 2639977 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1002 12:23:50.951586 2639977 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1002 12:23:50.951653 2639977 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 12:23:51.010184 2639977 crio.go:496] all images are preloaded for cri-o runtime.
	I1002 12:23:51.010211 2639977 crio.go:415] Images already preloaded, skipping extraction
	I1002 12:23:51.010286 2639977 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 12:23:51.057820 2639977 crio.go:496] all images are preloaded for cri-o runtime.
	I1002 12:23:51.057844 2639977 cache_images.go:84] Images are preloaded, skipping loading
	I1002 12:23:51.057930 2639977 ssh_runner.go:195] Run: crio config
	I1002 12:23:51.134323 2639977 cni.go:84] Creating CNI manager for ""
	I1002 12:23:51.134354 2639977 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 12:23:51.134375 2639977 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1002 12:23:51.134396 2639977 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.28.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-668509 NodeName:pause-668509 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 12:23:51.134556 2639977 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-668509"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 12:23:51.134647 2639977 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=pause-668509 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.2 ClusterName:pause-668509 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1002 12:23:51.134723 2639977 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.2
	I1002 12:23:51.147206 2639977 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 12:23:51.147303 2639977 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 12:23:51.159147 2639977 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (422 bytes)
	I1002 12:23:51.182957 2639977 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 12:23:51.205939 2639977 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2093 bytes)
	I1002 12:23:51.228986 2639977 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1002 12:23:51.234096 2639977 certs.go:56] Setting up /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/pause-668509 for IP: 192.168.85.2
	I1002 12:23:51.234145 2639977 certs.go:190] acquiring lock for shared ca certs: {Name:mk2e28f0a4c3849593f708b97426b4e4332dc9e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 12:23:51.234300 2639977 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17340-2494243/.minikube/ca.key
	I1002 12:23:51.234363 2639977 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17340-2494243/.minikube/proxy-client-ca.key
	I1002 12:23:51.234455 2639977 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/pause-668509/client.key
	I1002 12:23:51.234521 2639977 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/pause-668509/apiserver.key.43b9df8c
	I1002 12:23:51.234574 2639977 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/pause-668509/proxy-client.key
	I1002 12:23:51.234697 2639977 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-2494243/.minikube/certs/home/jenkins/minikube-integration/17340-2494243/.minikube/certs/2499598.pem (1338 bytes)
	W1002 12:23:51.234734 2639977 certs.go:433] ignoring /home/jenkins/minikube-integration/17340-2494243/.minikube/certs/home/jenkins/minikube-integration/17340-2494243/.minikube/certs/2499598_empty.pem, impossibly tiny 0 bytes
	I1002 12:23:51.234746 2639977 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-2494243/.minikube/certs/home/jenkins/minikube-integration/17340-2494243/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 12:23:51.234782 2639977 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-2494243/.minikube/certs/home/jenkins/minikube-integration/17340-2494243/.minikube/certs/ca.pem (1082 bytes)
	I1002 12:23:51.234814 2639977 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-2494243/.minikube/certs/home/jenkins/minikube-integration/17340-2494243/.minikube/certs/cert.pem (1123 bytes)
	I1002 12:23:51.234845 2639977 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-2494243/.minikube/certs/home/jenkins/minikube-integration/17340-2494243/.minikube/certs/key.pem (1675 bytes)
	I1002 12:23:51.234897 2639977 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-2494243/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17340-2494243/.minikube/files/etc/ssl/certs/24995982.pem (1708 bytes)
	I1002 12:23:51.235621 2639977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/pause-668509/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1002 12:23:51.266409 2639977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/pause-668509/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 12:23:51.296669 2639977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/pause-668509/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 12:23:51.326293 2639977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/pause-668509/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1002 12:23:51.356979 2639977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2494243/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 12:23:51.387304 2639977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2494243/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1002 12:23:51.417020 2639977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2494243/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 12:23:51.447152 2639977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2494243/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1002 12:23:51.476983 2639977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2494243/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 12:23:51.506612 2639977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2494243/.minikube/certs/2499598.pem --> /usr/share/ca-certificates/2499598.pem (1338 bytes)
	I1002 12:23:51.537204 2639977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2494243/.minikube/files/etc/ssl/certs/24995982.pem --> /usr/share/ca-certificates/24995982.pem (1708 bytes)
	I1002 12:23:51.568165 2639977 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 12:23:51.591366 2639977 ssh_runner.go:195] Run: openssl version
	I1002 12:23:51.599159 2639977 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 12:23:51.611989 2639977 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 12:23:51.617200 2639977 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct  2 11:39 /usr/share/ca-certificates/minikubeCA.pem
	I1002 12:23:51.617304 2639977 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 12:23:51.627433 2639977 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 12:23:51.639179 2639977 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2499598.pem && ln -fs /usr/share/ca-certificates/2499598.pem /etc/ssl/certs/2499598.pem"
	I1002 12:23:51.651858 2639977 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2499598.pem
	I1002 12:23:51.657111 2639977 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct  2 11:46 /usr/share/ca-certificates/2499598.pem
	I1002 12:23:51.657181 2639977 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2499598.pem
	I1002 12:23:51.666751 2639977 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2499598.pem /etc/ssl/certs/51391683.0"
	I1002 12:23:51.678931 2639977 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/24995982.pem && ln -fs /usr/share/ca-certificates/24995982.pem /etc/ssl/certs/24995982.pem"
	I1002 12:23:51.691698 2639977 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/24995982.pem
	I1002 12:23:51.697169 2639977 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct  2 11:46 /usr/share/ca-certificates/24995982.pem
	I1002 12:23:51.697261 2639977 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/24995982.pem
	I1002 12:23:51.706360 2639977 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/24995982.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 12:23:51.718258 2639977 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1002 12:23:51.723423 2639977 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 12:23:51.732718 2639977 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 12:23:51.741811 2639977 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 12:23:51.750912 2639977 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 12:23:51.759989 2639977 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 12:23:51.769196 2639977 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1002 12:23:51.778365 2639977 kubeadm.go:404] StartCluster: {Name:pause-668509 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:pause-668509 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 12:23:51.778490 2639977 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 12:23:51.778563 2639977 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 12:23:51.822968 2639977 cri.go:89] found id: "1d65fa43be6526a0e8de55d55c972bf54b762e630bff8bedaf4344333e76d262"
	I1002 12:23:51.822990 2639977 cri.go:89] found id: "765f6029e04503733090301109f6ac9d0680f171a384f96f69f1869087493160"
	I1002 12:23:51.822997 2639977 cri.go:89] found id: "91e1d9f2ab2dac9b1e8357f75a947028e40967ef9a5c6bb6e3ad20e171893d28"
	I1002 12:23:51.823002 2639977 cri.go:89] found id: "3f197576727f7b9ec7929a1095e70a78403ef2146c165ffdab079f0e2dede4ee"
	I1002 12:23:51.823006 2639977 cri.go:89] found id: "edc29e5627cfe3eaa7d550461b0649db0a97205623edd25c9b3483d9aa1e5d53"
	I1002 12:23:51.823010 2639977 cri.go:89] found id: "882bfdefd067301b2b80b674d4032a06b2972666a29999d24088c9dd4625335c"
	I1002 12:23:51.823015 2639977 cri.go:89] found id: "88efc396e9b05e5dc78b0839250cb73e559f9650b0c57fcb5d74e358a54fbcb8"
	I1002 12:23:51.823019 2639977 cri.go:89] found id: "014046d912dfad6da106e30926717a8ed76ad72bbdf909d598d7147d8acf8c0f"
	I1002 12:23:51.823023 2639977 cri.go:89] found id: "b33d761035748a0a3eec9230dbb5e3e8620b6b53104f8fbacf34b58e861dbd33"
	I1002 12:23:51.823031 2639977 cri.go:89] found id: "b75a9c72a0a9399ee5479a16b4e1dddef04175cba6814d862319c964ff038b22"
	I1002 12:23:51.823036 2639977 cri.go:89] found id: "d4334935979b6cd4813af58cde21e855aa6c9ce043bcfdd7d9f17311e49aad4d"
	I1002 12:23:51.823045 2639977 cri.go:89] found id: "6322ef12b5a5599464089b8ecd8b3448ccced205d43a22100aa3af0cb08d14e3"
	I1002 12:23:51.823050 2639977 cri.go:89] found id: "4c6d76d4f88efe79aacaf3c5a2fd4b03815c33f7e45703ab532ad1644e4cc1c7"
	I1002 12:23:51.823061 2639977 cri.go:89] found id: "91de1cce327dd75b6230291957ffc035b5386862fc03bd5173a07d32a578c04e"
	I1002 12:23:51.823065 2639977 cri.go:89] found id: "72f51e2ba1ec80099b42b049d710229c7f389aac19be780df56f100493a02618"
	I1002 12:23:51.823073 2639977 cri.go:89] found id: "f2a2f01e6b3c2a7757ee29334ecd929f156f633596a50d91eefb73ae8b541fae"
	I1002 12:23:51.823079 2639977 cri.go:89] found id: ""
	I1002 12:23:51.823137 2639977 ssh_runner.go:195] Run: sudo runc list -f json
	
	* 
	* ==> CRI-O <==
	* Oct 02 12:23:58 pause-668509 crio[2716]: time="2023-10-02 12:23:58.022190570Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist\": CREATE"
	Oct 02 12:23:58 pause-668509 crio[2716]: time="2023-10-02 12:23:58.040174591Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 12:23:58 pause-668509 crio[2716]: time="2023-10-02 12:23:58.040219629Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 12:24:10 pause-668509 crio[2716]: time="2023-10-02 12:24:10.233387773Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.10.1" id=9e87b2d7-0f41-4c2f-9105-a3bbf29156ea name=/runtime.v1.ImageService/ImageStatus
	Oct 02 12:24:10 pause-668509 crio[2716]: time="2023-10-02 12:24:10.233615063Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108,RepoTags:[registry.k8s.io/coredns/coredns:v1.10.1],RepoDigests:[registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105 registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e],Size_:51393451,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=9e87b2d7-0f41-4c2f-9105-a3bbf29156ea name=/runtime.v1.ImageService/ImageStatus
	Oct 02 12:24:10 pause-668509 crio[2716]: time="2023-10-02 12:24:10.235124063Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.10.1" id=3083962c-9fe0-45ae-80b4-5f77d868bf65 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 12:24:10 pause-668509 crio[2716]: time="2023-10-02 12:24:10.235372072Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108,RepoTags:[registry.k8s.io/coredns/coredns:v1.10.1],RepoDigests:[registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105 registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e],Size_:51393451,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=3083962c-9fe0-45ae-80b4-5f77d868bf65 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 12:24:10 pause-668509 crio[2716]: time="2023-10-02 12:24:10.236297637Z" level=info msg="Creating container: kube-system/coredns-5dd5756b68-j7vsd/coredns" id=4cb41110-407a-4ce8-b76f-9a42eb1de389 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 12:24:10 pause-668509 crio[2716]: time="2023-10-02 12:24:10.236392251Z" level=warning msg="Allowed annotations are specified for workload []"
	Oct 02 12:24:10 pause-668509 crio[2716]: time="2023-10-02 12:24:10.251759273Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/82b6aa3179384bdb1515b36f3002e4e966dfcb843e4558a9a538fe7457059e59/merged/etc/passwd: no such file or directory"
	Oct 02 12:24:10 pause-668509 crio[2716]: time="2023-10-02 12:24:10.251959421Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/82b6aa3179384bdb1515b36f3002e4e966dfcb843e4558a9a538fe7457059e59/merged/etc/group: no such file or directory"
	Oct 02 12:24:10 pause-668509 crio[2716]: time="2023-10-02 12:24:10.323406501Z" level=info msg="Created container 7d601eab89cf9e11c8895f99a6b96a42e1df74715799bd8ff799b456abd3bc81: kube-system/coredns-5dd5756b68-j7vsd/coredns" id=4cb41110-407a-4ce8-b76f-9a42eb1de389 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 12:24:10 pause-668509 crio[2716]: time="2023-10-02 12:24:10.323987269Z" level=info msg="Starting container: 7d601eab89cf9e11c8895f99a6b96a42e1df74715799bd8ff799b456abd3bc81" id=6db68f8a-c055-4b44-a82d-3625c78e246c name=/runtime.v1.RuntimeService/StartContainer
	Oct 02 12:24:10 pause-668509 crio[2716]: time="2023-10-02 12:24:10.336691763Z" level=info msg="Started container" PID=3512 containerID=7d601eab89cf9e11c8895f99a6b96a42e1df74715799bd8ff799b456abd3bc81 description=kube-system/coredns-5dd5756b68-j7vsd/coredns id=6db68f8a-c055-4b44-a82d-3625c78e246c name=/runtime.v1.RuntimeService/StartContainer sandboxID=e62eee13af75524043a806b0cbc26ad18f94d670442009152e75cbfea4fb5d22
	Oct 02 12:24:12 pause-668509 crio[2716]: time="2023-10-02 12:24:12.232639543Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.10.1" id=4edf79cc-1cef-4245-9a82-aec5ade137f3 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 12:24:12 pause-668509 crio[2716]: time="2023-10-02 12:24:12.232868893Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108,RepoTags:[registry.k8s.io/coredns/coredns:v1.10.1],RepoDigests:[registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105 registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e],Size_:51393451,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=4edf79cc-1cef-4245-9a82-aec5ade137f3 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 12:24:12 pause-668509 crio[2716]: time="2023-10-02 12:24:12.234119563Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.10.1" id=dbb19294-5082-488d-aa77-e0ed6b4b8545 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 12:24:12 pause-668509 crio[2716]: time="2023-10-02 12:24:12.234405840Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108,RepoTags:[registry.k8s.io/coredns/coredns:v1.10.1],RepoDigests:[registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105 registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e],Size_:51393451,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=dbb19294-5082-488d-aa77-e0ed6b4b8545 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 12:24:12 pause-668509 crio[2716]: time="2023-10-02 12:24:12.236424298Z" level=info msg="Creating container: kube-system/coredns-5dd5756b68-zkmnf/coredns" id=c425fd8a-2874-4a6a-a990-233664096fa6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 12:24:12 pause-668509 crio[2716]: time="2023-10-02 12:24:12.236556286Z" level=warning msg="Allowed annotations are specified for workload []"
	Oct 02 12:24:12 pause-668509 crio[2716]: time="2023-10-02 12:24:12.251694663Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/ee1b3e489842220926cad6721811e824019f5d765daa988c52e8232cfb1760f6/merged/etc/passwd: no such file or directory"
	Oct 02 12:24:12 pause-668509 crio[2716]: time="2023-10-02 12:24:12.251750614Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/ee1b3e489842220926cad6721811e824019f5d765daa988c52e8232cfb1760f6/merged/etc/group: no such file or directory"
	Oct 02 12:24:12 pause-668509 crio[2716]: time="2023-10-02 12:24:12.331699512Z" level=info msg="Created container 01cd06b39750919f7307ad071b36e012031b8b05c0c3f70b4b7ebd081a6f964c: kube-system/coredns-5dd5756b68-zkmnf/coredns" id=c425fd8a-2874-4a6a-a990-233664096fa6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 12:24:12 pause-668509 crio[2716]: time="2023-10-02 12:24:12.334249154Z" level=info msg="Starting container: 01cd06b39750919f7307ad071b36e012031b8b05c0c3f70b4b7ebd081a6f964c" id=efde65b9-3988-4b91-84be-3d50ce664d09 name=/runtime.v1.RuntimeService/StartContainer
	Oct 02 12:24:12 pause-668509 crio[2716]: time="2023-10-02 12:24:12.349451073Z" level=info msg="Started container" PID=3562 containerID=01cd06b39750919f7307ad071b36e012031b8b05c0c3f70b4b7ebd081a6f964c description=kube-system/coredns-5dd5756b68-zkmnf/coredns id=efde65b9-3988-4b91-84be-3d50ce664d09 name=/runtime.v1.RuntimeService/StartContainer sandboxID=924858efb0edf7686af6798f9dacdda3bb9d6787731c7f30b9dcb0703999528b
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	01cd06b397509       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108   7 seconds ago       Running             coredns                   2                   924858efb0edf       coredns-5dd5756b68-zkmnf
	7d601eab89cf9       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108   9 seconds ago       Running             coredns                   2                   e62eee13af755       coredns-5dd5756b68-j7vsd
	1d10a59354950       04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26   26 seconds ago      Running             kindnet-cni               2                   f2c1c83b5c15f       kindnet-pkx85
	c8af7d0a9b1ef       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace   26 seconds ago      Running             etcd                      2                   40fbe4cc2226b       etcd-pause-668509
	7a46fbd014b6e       7da62c127fc0f2c3473babe4dd0fe1da874278c4e524a490b1781e3e0e6dddfa   26 seconds ago      Running             kube-proxy                2                   4d282c0c07d58       kube-proxy-54fsr
	0f1e64fd43e32       30bb499447fe1bc655853e2e8ac386cdeb28c80263536259cb54f290f9a58d6c   26 seconds ago      Running             kube-apiserver            2                   71f4895f9a35c       kube-apiserver-pause-668509
	36c2b8703fe1a       64fc40cee3716a4596d219b360ce536adb7998eaeae3f5dbb774d6503f5039d7   26 seconds ago      Running             kube-scheduler            2                   d5534b361a530       kube-scheduler-pause-668509
	1cc27dbdfb293       89d57b83c17862d0ca2dd214e9e5ad425f8d67ecba32d10b846f8d22d3b5597c   26 seconds ago      Running             kube-controller-manager   2                   5fc9ffdbb568d       kube-controller-manager-pause-668509
	1d65fa43be652       30bb499447fe1bc655853e2e8ac386cdeb28c80263536259cb54f290f9a58d6c   39 seconds ago      Exited              kube-apiserver            1                   71f4895f9a35c       kube-apiserver-pause-668509
	765f6029e0450       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108   39 seconds ago      Exited              coredns                   1                   e62eee13af755       coredns-5dd5756b68-j7vsd
	91e1d9f2ab2da       89d57b83c17862d0ca2dd214e9e5ad425f8d67ecba32d10b846f8d22d3b5597c   39 seconds ago      Exited              kube-controller-manager   1                   5fc9ffdbb568d       kube-controller-manager-pause-668509
	3f197576727f7       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace   39 seconds ago      Exited              etcd                      1                   40fbe4cc2226b       etcd-pause-668509
	edc29e5627cfe       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108   39 seconds ago      Exited              coredns                   1                   924858efb0edf       coredns-5dd5756b68-zkmnf
	882bfdefd0673       04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26   39 seconds ago      Exited              kindnet-cni               1                   f2c1c83b5c15f       kindnet-pkx85
	88efc396e9b05       64fc40cee3716a4596d219b360ce536adb7998eaeae3f5dbb774d6503f5039d7   39 seconds ago      Exited              kube-scheduler            1                   d5534b361a530       kube-scheduler-pause-668509
	014046d912dfa       7da62c127fc0f2c3473babe4dd0fe1da874278c4e524a490b1781e3e0e6dddfa   39 seconds ago      Exited              kube-proxy                1                   4d282c0c07d58       kube-proxy-54fsr
	
	* 
	* ==> coredns [01cd06b39750919f7307ad071b36e012031b8b05c0c3f70b4b7ebd081a6f964c] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:54643 - 56215 "HINFO IN 6260093662389047834.4517571499266444921. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.022751645s
	
	* 
	* ==> coredns [765f6029e04503733090301109f6ac9d0680f171a384f96f69f1869087493160] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:44717 - 1084 "HINFO IN 289934871214255645.2867881275319821904. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.014817508s
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
	
	* 
	* ==> coredns [7d601eab89cf9e11c8895f99a6b96a42e1df74715799bd8ff799b456abd3bc81] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:46305 - 59821 "HINFO IN 3123776545537870573.2831650581655121560. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.040399804s
	
	* 
	* ==> coredns [edc29e5627cfe3eaa7d550461b0649db0a97205623edd25c9b3483d9aa1e5d53] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:52016 - 42404 "HINFO IN 4303122002242223.5155889025548070122. udp 54 false 512" NXDOMAIN qr,rd,ra 54 0.042840992s
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
	
	* 
	* ==> describe nodes <==
	* Name:               pause-668509
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-668509
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=45957ed538272972541ab48cdf2c4b323d7f5c18
	                    minikube.k8s.io/name=pause-668509
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_02T12_22_45_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 02 Oct 2023 12:22:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-668509
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 02 Oct 2023 12:24:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 02 Oct 2023 12:23:28 +0000   Mon, 02 Oct 2023 12:22:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 02 Oct 2023 12:23:28 +0000   Mon, 02 Oct 2023 12:22:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 02 Oct 2023 12:23:28 +0000   Mon, 02 Oct 2023 12:22:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 02 Oct 2023 12:23:28 +0000   Mon, 02 Oct 2023 12:23:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-668509
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	System Info:
	  Machine ID:                 8cb933677f0e435dafb0b2eec666892b
	  System UUID:                5130b464-2e2d-4979-8db2-1a2be70a00b6
	  Boot ID:                    67922263-14c1-496d-a009-5b9469adca8d
	  Kernel Version:             5.15.0-1045-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.2
	  Kube-Proxy Version:         v1.28.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-j7vsd                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     83s
	  kube-system                 coredns-5dd5756b68-zkmnf                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     83s
	  kube-system                 etcd-pause-668509                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         95s
	  kube-system                 kindnet-pkx85                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      83s
	  kube-system                 kube-apiserver-pause-668509             250m (12%)    0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 kube-controller-manager-pause-668509    200m (10%)    0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 kube-proxy-54fsr                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         83s
	  kube-system                 kube-scheduler-pause-668509             100m (5%)     0 (0%)      0 (0%)           0 (0%)         95s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (3%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 82s                  kube-proxy       
	  Normal   Starting                 20s                  kube-proxy       
	  Normal   NodeHasNoDiskPressure    104s (x8 over 104s)  kubelet          Node pause-668509 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     104s (x8 over 104s)  kubelet          Node pause-668509 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  104s (x8 over 104s)  kubelet          Node pause-668509 status is now: NodeHasSufficientMemory
	  Normal   Starting                 95s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  95s                  kubelet          Node pause-668509 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    95s                  kubelet          Node pause-668509 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     95s                  kubelet          Node pause-668509 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           84s                  node-controller  Node pause-668509 event: Registered Node pause-668509 in Controller
	  Normal   NodeReady                51s                  kubelet          Node pause-668509 status is now: NodeReady
	  Warning  ContainerGCFailed        35s                  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           9s                   node-controller  Node pause-668509 event: Registered Node pause-668509 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.001115] FS-Cache: O-key=[8] 'b3495c0100000000'
	[  +0.000697] FS-Cache: N-cookie c=000000c0 [p=000000b7 fl=2 nc=0 na=1]
	[  +0.000936] FS-Cache: N-cookie d=00000000b3d4d329{9p.inode} n=00000000c4ccfeb8
	[  +0.001126] FS-Cache: N-key=[8] 'b3495c0100000000'
	[  +0.003366] FS-Cache: Duplicate cookie detected
	[  +0.000731] FS-Cache: O-cookie c=000000b9 [p=000000b7 fl=226 nc=0 na=1]
	[  +0.001023] FS-Cache: O-cookie d=00000000b3d4d329{9p.inode} n=00000000389fe983
	[  +0.001046] FS-Cache: O-key=[8] 'b3495c0100000000'
	[  +0.000724] FS-Cache: N-cookie c=000000c1 [p=000000b7 fl=2 nc=0 na=1]
	[  +0.001003] FS-Cache: N-cookie d=00000000b3d4d329{9p.inode} n=000000006a94d035
	[  +0.001082] FS-Cache: N-key=[8] 'b3495c0100000000'
	[  +2.104034] FS-Cache: Duplicate cookie detected
	[  +0.000750] FS-Cache: O-cookie c=000000b8 [p=000000b7 fl=226 nc=0 na=1]
	[  +0.000953] FS-Cache: O-cookie d=00000000b3d4d329{9p.inode} n=000000000fb754f7
	[  +0.001029] FS-Cache: O-key=[8] 'b2495c0100000000'
	[  +0.000775] FS-Cache: N-cookie c=000000c3 [p=000000b7 fl=2 nc=0 na=1]
	[  +0.000924] FS-Cache: N-cookie d=00000000b3d4d329{9p.inode} n=00000000c4ccfeb8
	[  +0.001095] FS-Cache: N-key=[8] 'b2495c0100000000'
	[  +0.359690] FS-Cache: Duplicate cookie detected
	[  +0.000696] FS-Cache: O-cookie c=000000bd [p=000000b7 fl=226 nc=0 na=1]
	[  +0.000971] FS-Cache: O-cookie d=00000000b3d4d329{9p.inode} n=00000000bde6dc72
	[  +0.001094] FS-Cache: O-key=[8] 'b8495c0100000000'
	[  +0.000774] FS-Cache: N-cookie c=000000c4 [p=000000b7 fl=2 nc=0 na=1]
	[  +0.000930] FS-Cache: N-cookie d=00000000b3d4d329{9p.inode} n=00000000e33aca47
	[  +0.001035] FS-Cache: N-key=[8] 'b8495c0100000000'
	
	* 
	* ==> etcd [3f197576727f7b9ec7929a1095e70a78403ef2146c165ffdab079f0e2dede4ee] <==
	* {"level":"info","ts":"2023-10-02T12:23:40.292126Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"48.503667ms"}
	{"level":"info","ts":"2023-10-02T12:23:40.382142Z","caller":"etcdserver/server.go:530","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2023-10-02T12:23:40.449783Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","commit-index":457}
	{"level":"info","ts":"2023-10-02T12:23:40.449936Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed switched to configuration voters=()"}
	{"level":"info","ts":"2023-10-02T12:23:40.449967Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became follower at term 2"}
	{"level":"info","ts":"2023-10-02T12:23:40.449978Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 9f0758e1c58a86ed [peers: [], term: 2, commit: 457, applied: 0, lastindex: 457, lastterm: 2]"}
	{"level":"warn","ts":"2023-10-02T12:23:40.45286Z","caller":"auth/store.go:1238","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2023-10-02T12:23:40.478367Z","caller":"mvcc/kvstore.go:393","msg":"kvstore restored","current-rev":438}
	{"level":"info","ts":"2023-10-02T12:23:40.506244Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2023-10-02T12:23:40.511095Z","caller":"etcdserver/corrupt.go:95","msg":"starting initial corruption check","local-member-id":"9f0758e1c58a86ed","timeout":"7s"}
	{"level":"info","ts":"2023-10-02T12:23:40.512195Z","caller":"etcdserver/corrupt.go:165","msg":"initial corruption checking passed; no corruption","local-member-id":"9f0758e1c58a86ed"}
	{"level":"info","ts":"2023-10-02T12:23:40.512242Z","caller":"etcdserver/server.go:854","msg":"starting etcd server","local-member-id":"9f0758e1c58a86ed","local-server-version":"3.5.9","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2023-10-02T12:23:40.516382Z","caller":"etcdserver/server.go:754","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2023-10-02T12:23:40.517029Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-10-02T12:23:40.517076Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-10-02T12:23:40.517085Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-10-02T12:23:40.52575Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2023-10-02T12:23:40.525879Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2023-10-02T12:23:40.526025Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-02T12:23:40.526063Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-02T12:23:40.538138Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-10-02T12:23:40.538332Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-10-02T12:23:40.53836Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-10-02T12:23:40.538418Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2023-10-02T12:23:40.538431Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	
	* 
	* ==> etcd [c8af7d0a9b1efa58b8b7f99c10a1984c1ce9ecd2bd9d32fd9e835104c3c6cfdb] <==
	* {"level":"info","ts":"2023-10-02T12:23:53.305062Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-10-02T12:23:53.305072Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-10-02T12:23:53.305716Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2023-10-02T12:23:53.353867Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2023-10-02T12:23:53.354484Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-02T12:23:53.354532Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-02T12:23:53.376556Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-10-02T12:23:53.376753Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-10-02T12:23:53.376775Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-10-02T12:23:53.376824Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2023-10-02T12:23:53.376831Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2023-10-02T12:23:54.376585Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2023-10-02T12:23:54.376716Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2023-10-02T12:23:54.376769Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2023-10-02T12:23:54.37681Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2023-10-02T12:23:54.376845Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2023-10-02T12:23:54.376884Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2023-10-02T12:23:54.376916Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2023-10-02T12:23:54.384822Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-02T12:23:54.385883Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-10-02T12:23:54.386352Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-02T12:23:54.387258Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2023-10-02T12:23:54.384784Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:pause-668509 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2023-10-02T12:23:54.392556Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-10-02T12:23:54.392587Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	* 
	* ==> kernel <==
	*  12:24:19 up 20:06,  0 users,  load average: 2.51, 3.03, 2.60
	Linux pause-668509 5.15.0-1045-aws #50~20.04.1-Ubuntu SMP Wed Sep 6 17:32:55 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [1d10a59354950d12d06709e6c2d72fb0b67b72881737b058aabf63b2352188bf] <==
	* I1002 12:23:53.251630       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I1002 12:23:53.282868       1 main.go:107] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1002 12:23:53.283118       1 main.go:116] setting mtu 1500 for CNI 
	I1002 12:23:53.283163       1 main.go:146] kindnetd IP family: "ipv4"
	I1002 12:23:53.283210       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I1002 12:23:53.590199       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	I1002 12:23:53.590551       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	I1002 12:23:57.978376       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I1002 12:23:57.978483       1 main.go:227] handling current node
	I1002 12:24:08.001168       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I1002 12:24:08.001206       1 main.go:227] handling current node
	I1002 12:24:18.013937       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I1002 12:24:18.013967       1 main.go:227] handling current node
	
	* 
	* ==> kindnet [882bfdefd067301b2b80b674d4032a06b2972666a29999d24088c9dd4625335c] <==
	* I1002 12:23:40.016392       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I1002 12:23:40.016980       1 main.go:107] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1002 12:23:40.018692       1 main.go:116] setting mtu 1500 for CNI 
	I1002 12:23:40.018733       1 main.go:146] kindnetd IP family: "ipv4"
	I1002 12:23:40.018936       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I1002 12:23:40.392852       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	I1002 12:23:40.393280       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	
	* 
	* ==> kube-apiserver [0f1e64fd43e3240c0c6aed3968cfbd67941c6eef246a45887ce5d1e22428ca39] <==
	* I1002 12:23:57.574440       1 naming_controller.go:291] Starting NamingConditionController
	I1002 12:23:57.574490       1 establishing_controller.go:76] Starting EstablishingController
	I1002 12:23:57.574535       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I1002 12:23:57.574579       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I1002 12:23:57.574625       1 crd_finalizer.go:266] Starting CRDFinalizer
	I1002 12:23:57.574680       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I1002 12:23:57.574712       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I1002 12:23:57.904536       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1002 12:23:57.905010       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1002 12:23:57.936617       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1002 12:23:57.936714       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1002 12:23:57.944665       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1002 12:23:57.946881       1 shared_informer.go:318] Caches are synced for configmaps
	I1002 12:23:57.967013       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1002 12:23:57.973703       1 aggregator.go:166] initial CRD sync complete...
	I1002 12:23:57.973792       1 autoregister_controller.go:141] Starting autoregister controller
	I1002 12:23:57.973840       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1002 12:23:57.973872       1 cache.go:39] Caches are synced for autoregister controller
	I1002 12:23:57.987071       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1002 12:23:57.987573       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	E1002 12:23:58.033646       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1002 12:23:58.558638       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1002 12:24:10.653544       1 controller.go:624] quota admission added evaluator for: endpoints
	I1002 12:24:10.676005       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1002 12:24:10.731563       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	* 
	* ==> kube-apiserver [1d65fa43be6526a0e8de55d55c972bf54b762e630bff8bedaf4344333e76d262] <==
	* I1002 12:23:40.420840       1 options.go:220] external host was not specified, using 192.168.85.2
	I1002 12:23:40.422030       1 server.go:148] Version: v1.28.2
	I1002 12:23:40.422060       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	
	* 
	* ==> kube-controller-manager [1cc27dbdfb293caf335859ce8d4c161b6ad2a7d177bdc89a644e4897f1c77279] <==
	* I1002 12:24:10.697205       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="85.252µs"
	I1002 12:24:10.698349       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I1002 12:24:10.698405       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client
	I1002 12:24:10.699527       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
	I1002 12:24:10.701211       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="81.814µs"
	I1002 12:24:10.701923       1 shared_informer.go:318] Caches are synced for TTL
	I1002 12:24:10.707426       1 shared_informer.go:318] Caches are synced for namespace
	I1002 12:24:10.713328       1 shared_informer.go:318] Caches are synced for daemon sets
	I1002 12:24:10.732179       1 shared_informer.go:318] Caches are synced for stateful set
	I1002 12:24:10.747045       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="21.788057ms"
	I1002 12:24:10.747227       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="51.733µs"
	I1002 12:24:10.766466       1 shared_informer.go:318] Caches are synced for disruption
	I1002 12:24:10.770051       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1002 12:24:10.788619       1 shared_informer.go:318] Caches are synced for resource quota
	I1002 12:24:10.805062       1 shared_informer.go:318] Caches are synced for cronjob
	I1002 12:24:10.829709       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-zkmnf"
	I1002 12:24:10.868003       1 shared_informer.go:318] Caches are synced for resource quota
	I1002 12:24:10.879932       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="110.841261ms"
	I1002 12:24:10.892881       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="12.81738ms"
	I1002 12:24:10.893133       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="82.782µs"
	I1002 12:24:11.205333       1 shared_informer.go:318] Caches are synced for garbage collector
	I1002 12:24:11.234061       1 shared_informer.go:318] Caches are synced for garbage collector
	I1002 12:24:11.234099       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1002 12:24:12.698999       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="69.703µs"
	I1002 12:24:12.711756       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="162.29µs"
	
	* 
	* ==> kube-controller-manager [91e1d9f2ab2dac9b1e8357f75a947028e40967ef9a5c6bb6e3ad20e171893d28] <==
	* 
	* 
	* ==> kube-proxy [014046d912dfad6da106e30926717a8ed76ad72bbdf909d598d7147d8acf8c0f] <==
	* I1002 12:23:40.721495       1 server_others.go:69] "Using iptables proxy"
	
	* 
	* ==> kube-proxy [7a46fbd014b6ef171ac4b887b0bca912677d657447a64ffe1f89112122df2bc6] <==
	* I1002 12:23:57.912754       1 server_others.go:69] "Using iptables proxy"
	I1002 12:23:58.506662       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1002 12:23:58.914887       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1002 12:23:58.920783       1 server_others.go:152] "Using iptables Proxier"
	I1002 12:23:58.920887       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1002 12:23:58.920918       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1002 12:23:58.921035       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1002 12:23:58.921290       1 server.go:846] "Version info" version="v1.28.2"
	I1002 12:23:58.921515       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 12:23:58.922298       1 config.go:188] "Starting service config controller"
	I1002 12:23:58.923128       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1002 12:23:58.923224       1 config.go:97] "Starting endpoint slice config controller"
	I1002 12:23:58.923350       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1002 12:23:58.923893       1 config.go:315] "Starting node config controller"
	I1002 12:23:58.924418       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1002 12:23:59.023841       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1002 12:23:59.023853       1 shared_informer.go:318] Caches are synced for service config
	I1002 12:23:59.025316       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [36c2b8703fe1ac719f1850010abebaf47f3b6108bb4ce8d3ea8f1a06c80a5b96] <==
	* I1002 12:23:56.517329       1 serving.go:348] Generated self-signed cert in-memory
	I1002 12:23:58.833129       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.2"
	I1002 12:23:58.833726       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 12:23:58.843808       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1002 12:23:58.844021       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I1002 12:23:58.844087       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I1002 12:23:58.844143       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1002 12:23:58.846513       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 12:23:58.846608       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1002 12:23:58.846651       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 12:23:58.846693       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1002 12:23:58.945026       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	I1002 12:23:58.947475       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1002 12:23:58.947591       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kube-scheduler [88efc396e9b05e5dc78b0839250cb73e559f9650b0c57fcb5d74e358a54fbcb8] <==
	* 
	* 
	* ==> kubelet <==
	* Oct 02 12:23:52 pause-668509 kubelet[1395]: I1002 12:23:52.610703    1395 status_manager.go:853] "Failed to get status for pod" podUID="7956f3f0-c4c6-4405-bde6-2c220f0595a7" pod="kube-system/kindnet-pkx85" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kindnet-pkx85\": dial tcp 192.168.85.2:8443: connect: connection refused"
	Oct 02 12:23:52 pause-668509 kubelet[1395]: I1002 12:23:52.610923    1395 status_manager.go:853] "Failed to get status for pod" podUID="b0ec77ab-f124-423f-a7b1-a2a48efb563b" pod="kube-system/kube-proxy-54fsr" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-proxy-54fsr\": dial tcp 192.168.85.2:8443: connect: connection refused"
	Oct 02 12:23:52 pause-668509 kubelet[1395]: I1002 12:23:52.612737    1395 status_manager.go:853] "Failed to get status for pod" podUID="7956f3f0-c4c6-4405-bde6-2c220f0595a7" pod="kube-system/kindnet-pkx85" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kindnet-pkx85\": dial tcp 192.168.85.2:8443: connect: connection refused"
	Oct 02 12:23:52 pause-668509 kubelet[1395]: I1002 12:23:52.613075    1395 status_manager.go:853] "Failed to get status for pod" podUID="b0ec77ab-f124-423f-a7b1-a2a48efb563b" pod="kube-system/kube-proxy-54fsr" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-proxy-54fsr\": dial tcp 192.168.85.2:8443: connect: connection refused"
	Oct 02 12:23:52 pause-668509 kubelet[1395]: I1002 12:23:52.615839    1395 status_manager.go:853] "Failed to get status for pod" podUID="6f2f859a-29d5-4aec-befb-42314c660c0a" pod="kube-system/coredns-5dd5756b68-j7vsd" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-j7vsd\": dial tcp 192.168.85.2:8443: connect: connection refused"
	Oct 02 12:23:52 pause-668509 kubelet[1395]: I1002 12:23:52.616194    1395 status_manager.go:853] "Failed to get status for pod" podUID="6ffeca67-4965-45fd-887d-779d8033e909" pod="kube-system/coredns-5dd5756b68-zkmnf" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-zkmnf\": dial tcp 192.168.85.2:8443: connect: connection refused"
	Oct 02 12:23:52 pause-668509 kubelet[1395]: I1002 12:23:52.616465    1395 status_manager.go:853] "Failed to get status for pod" podUID="2013f852b0714c374746c377791e3c5f" pod="kube-system/kube-apiserver-pause-668509" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-668509\": dial tcp 192.168.85.2:8443: connect: connection refused"
	Oct 02 12:23:52 pause-668509 kubelet[1395]: I1002 12:23:52.616778    1395 status_manager.go:853] "Failed to get status for pod" podUID="8c0eeb91a05b92cf83dbbd9d020af051" pod="kube-system/kube-controller-manager-pause-668509" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-668509\": dial tcp 192.168.85.2:8443: connect: connection refused"
	Oct 02 12:23:52 pause-668509 kubelet[1395]: I1002 12:23:52.617064    1395 status_manager.go:853] "Failed to get status for pod" podUID="55b3fa4b52a9f35848e6c33b248b5edd" pod="kube-system/kube-scheduler-pause-668509" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-668509\": dial tcp 192.168.85.2:8443: connect: connection refused"
	Oct 02 12:23:52 pause-668509 kubelet[1395]: I1002 12:23:52.617355    1395 status_manager.go:853] "Failed to get status for pod" podUID="a403b77af54159ac2719f26849a863d7" pod="kube-system/etcd-pause-668509" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/etcd-pause-668509\": dial tcp 192.168.85.2:8443: connect: connection refused"
	Oct 02 12:23:52 pause-668509 kubelet[1395]: I1002 12:23:52.650898    1395 scope.go:117] "RemoveContainer" containerID="72f51e2ba1ec80099b42b049d710229c7f389aac19be780df56f100493a02618"
	Oct 02 12:23:52 pause-668509 kubelet[1395]: I1002 12:23:52.814424    1395 scope.go:117] "RemoveContainer" containerID="f2a2f01e6b3c2a7757ee29334ecd929f156f633596a50d91eefb73ae8b541fae"
	Oct 02 12:23:52 pause-668509 kubelet[1395]: I1002 12:23:52.894921    1395 scope.go:117] "RemoveContainer" containerID="d4334935979b6cd4813af58cde21e855aa6c9ce043bcfdd7d9f17311e49aad4d"
	Oct 02 12:23:52 pause-668509 kubelet[1395]: I1002 12:23:52.984794    1395 scope.go:117] "RemoveContainer" containerID="6322ef12b5a5599464089b8ecd8b3448ccced205d43a22100aa3af0cb08d14e3"
	Oct 02 12:23:53 pause-668509 kubelet[1395]: I1002 12:23:53.060987    1395 scope.go:117] "RemoveContainer" containerID="b33d761035748a0a3eec9230dbb5e3e8620b6b53104f8fbacf34b58e861dbd33"
	Oct 02 12:23:53 pause-668509 kubelet[1395]: E1002 12:23:53.061859    1395 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-apiserver-pause-668509.178a49dafb3763e3", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-apiserver-pause-668509", UID:"2013f852b0714c374746c377791e3c5f", APIVersion:"v1", ResourceVersion:"", FieldPath:"spec.containers{kube-apiserver}"}, Reason:"Unhealthy", Message:"Readiness probe failed: Get \"https://192.168.85.2:8443/readyz\": dial tcp 192.168.85.2:8443: connect: connection refused", Source:v1.EventSource{Component:"kubelet", Host:"pause-668509"}, FirstTimestamp:time.Date(2023, time.October, 2, 12, 23, 34, 524822499, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 12, 23, 34, 524822499, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"pause-668509"}': 'Post "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/events": dial tcp 192.168.85.2:8443: connect: connection refused'(may retry after sleeping)
	Oct 02 12:23:53 pause-668509 kubelet[1395]: I1002 12:23:53.125916    1395 scope.go:117] "RemoveContainer" containerID="b75a9c72a0a9399ee5479a16b4e1dddef04175cba6814d862319c964ff038b22"
	Oct 02 12:23:53 pause-668509 kubelet[1395]: I1002 12:23:53.177851    1395 scope.go:117] "RemoveContainer" containerID="4c6d76d4f88efe79aacaf3c5a2fd4b03815c33f7e45703ab532ad1644e4cc1c7"
	Oct 02 12:23:57 pause-668509 kubelet[1395]: E1002 12:23:57.594186    1395 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: unknown (get configmaps)
	Oct 02 12:23:58 pause-668509 kubelet[1395]: I1002 12:23:58.532327    1395 scope.go:117] "RemoveContainer" containerID="765f6029e04503733090301109f6ac9d0680f171a384f96f69f1869087493160"
	Oct 02 12:23:58 pause-668509 kubelet[1395]: E1002 12:23:58.533115    1395 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 10s restarting failed container=coredns pod=coredns-5dd5756b68-j7vsd_kube-system(6f2f859a-29d5-4aec-befb-42314c660c0a)\"" pod="kube-system/coredns-5dd5756b68-j7vsd" podUID="6f2f859a-29d5-4aec-befb-42314c660c0a"
	Oct 02 12:23:58 pause-668509 kubelet[1395]: I1002 12:23:58.540263    1395 scope.go:117] "RemoveContainer" containerID="edc29e5627cfe3eaa7d550461b0649db0a97205623edd25c9b3483d9aa1e5d53"
	Oct 02 12:23:58 pause-668509 kubelet[1395]: E1002 12:23:58.540848    1395 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 10s restarting failed container=coredns pod=coredns-5dd5756b68-zkmnf_kube-system(6ffeca67-4965-45fd-887d-779d8033e909)\"" pod="kube-system/coredns-5dd5756b68-zkmnf" podUID="6ffeca67-4965-45fd-887d-779d8033e909"
	Oct 02 12:24:10 pause-668509 kubelet[1395]: I1002 12:24:10.232686    1395 scope.go:117] "RemoveContainer" containerID="765f6029e04503733090301109f6ac9d0680f171a384f96f69f1869087493160"
	Oct 02 12:24:12 pause-668509 kubelet[1395]: I1002 12:24:12.231907    1395 scope.go:117] "RemoveContainer" containerID="edc29e5627cfe3eaa7d550461b0649db0a97205623edd25c9b3483d9aa1e5d53"
	

-- /stdout --
** stderr ** 
	E1002 12:24:18.566753 2643418 logs.go:266] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/17340-2494243/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-668509 -n pause-668509
helpers_test.go:261: (dbg) Run:  kubectl --context pause-668509 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (49.01s)


Test pass (263/299)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 10.38
4 TestDownloadOnly/v1.16.0/preload-exists 0.01
8 TestDownloadOnly/v1.16.0/LogsDuration 0.39
10 TestDownloadOnly/v1.28.2/json-events 11.84
11 TestDownloadOnly/v1.28.2/preload-exists 0
15 TestDownloadOnly/v1.28.2/LogsDuration 0.07
16 TestDownloadOnly/DeleteAll 13.39
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.15
19 TestBinaryMirror 0.6
22 TestAddons/Setup 138.36
24 TestAddons/parallel/Registry 16.09
26 TestAddons/parallel/InspektorGadget 10.85
27 TestAddons/parallel/MetricsServer 6.02
30 TestAddons/parallel/CSI 52.93
31 TestAddons/parallel/Headlamp 13.72
32 TestAddons/parallel/CloudSpanner 5.67
33 TestAddons/parallel/LocalPath 9.46
36 TestAddons/serial/GCPAuth/Namespaces 0.19
37 TestAddons/StoppedEnableDisable 12.39
38 TestCertOptions 40.18
39 TestCertExpiration 252.85
41 TestForceSystemdFlag 43.54
42 TestForceSystemdEnv 40.05
48 TestErrorSpam/setup 32.11
49 TestErrorSpam/start 0.85
50 TestErrorSpam/status 1.16
51 TestErrorSpam/pause 1.93
52 TestErrorSpam/unpause 2.08
53 TestErrorSpam/stop 1.43
56 TestFunctional/serial/CopySyncFile 0
57 TestFunctional/serial/StartWithProxy 80.54
58 TestFunctional/serial/AuditLog 0
59 TestFunctional/serial/SoftStart 43.57
60 TestFunctional/serial/KubeContext 0.06
61 TestFunctional/serial/KubectlGetPods 0.12
64 TestFunctional/serial/CacheCmd/cache/add_remote 4.23
65 TestFunctional/serial/CacheCmd/cache/add_local 1.13
66 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
67 TestFunctional/serial/CacheCmd/cache/list 0.06
68 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.33
69 TestFunctional/serial/CacheCmd/cache/cache_reload 2.25
70 TestFunctional/serial/CacheCmd/cache/delete 0.12
71 TestFunctional/serial/MinikubeKubectlCmd 0.16
72 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.15
73 TestFunctional/serial/ExtraConfig 37.24
74 TestFunctional/serial/ComponentHealth 0.11
75 TestFunctional/serial/LogsCmd 1.91
76 TestFunctional/serial/LogsFileCmd 1.88
77 TestFunctional/serial/InvalidService 4.69
79 TestFunctional/parallel/ConfigCmd 0.45
80 TestFunctional/parallel/DashboardCmd 10.14
81 TestFunctional/parallel/DryRun 0.61
82 TestFunctional/parallel/InternationalLanguage 0.21
83 TestFunctional/parallel/StatusCmd 1.18
87 TestFunctional/parallel/ServiceCmdConnect 10.8
88 TestFunctional/parallel/AddonsCmd 0.22
89 TestFunctional/parallel/PersistentVolumeClaim 26.05
91 TestFunctional/parallel/SSHCmd 0.77
92 TestFunctional/parallel/CpCmd 1.47
94 TestFunctional/parallel/FileSync 0.34
95 TestFunctional/parallel/CertSync 2.39
99 TestFunctional/parallel/NodeLabels 0.12
101 TestFunctional/parallel/NonActiveRuntimeDisabled 0.77
103 TestFunctional/parallel/License 0.41
105 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.63
106 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
108 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.46
109 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.13
110 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
114 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
115 TestFunctional/parallel/ServiceCmd/DeployApp 7.26
116 TestFunctional/parallel/ProfileCmd/profile_not_create 0.5
117 TestFunctional/parallel/ProfileCmd/profile_list 0.43
118 TestFunctional/parallel/ProfileCmd/profile_json_output 0.41
119 TestFunctional/parallel/ServiceCmd/List 0.65
120 TestFunctional/parallel/MountCmd/any-port 7.91
121 TestFunctional/parallel/ServiceCmd/JSONOutput 0.55
122 TestFunctional/parallel/ServiceCmd/HTTPS 0.53
123 TestFunctional/parallel/ServiceCmd/Format 0.57
124 TestFunctional/parallel/ServiceCmd/URL 0.5
125 TestFunctional/parallel/MountCmd/specific-port 2.92
126 TestFunctional/parallel/MountCmd/VerifyCleanup 3.07
127 TestFunctional/parallel/Version/short 0.08
128 TestFunctional/parallel/Version/components 0.99
129 TestFunctional/parallel/ImageCommands/ImageListShort 0.27
130 TestFunctional/parallel/ImageCommands/ImageListTable 0.29
131 TestFunctional/parallel/ImageCommands/ImageListJson 0.29
132 TestFunctional/parallel/ImageCommands/ImageListYaml 0.33
133 TestFunctional/parallel/ImageCommands/ImageBuild 3.04
134 TestFunctional/parallel/ImageCommands/Setup 2.06
135 TestFunctional/parallel/UpdateContextCmd/no_changes 0.25
136 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.23
137 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.25
138 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.88
139 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.98
140 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 5.7
141 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.97
142 TestFunctional/parallel/ImageCommands/ImageRemove 0.56
143 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.32
144 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.99
145 TestFunctional/delete_addon-resizer_images 0.08
146 TestFunctional/delete_my-image_image 0.03
147 TestFunctional/delete_minikube_cached_images 0.02
151 TestIngressAddonLegacy/StartLegacyK8sCluster 87.69
153 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 13.51
154 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.69
158 TestJSONOutput/start/Command 80.96
159 TestJSONOutput/start/Audit 0
161 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
162 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
164 TestJSONOutput/pause/Command 0.85
165 TestJSONOutput/pause/Audit 0
167 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
168 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
170 TestJSONOutput/unpause/Command 0.76
171 TestJSONOutput/unpause/Audit 0
173 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
174 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
176 TestJSONOutput/stop/Command 5.97
177 TestJSONOutput/stop/Audit 0
179 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
180 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
181 TestErrorJSONOutput 0.23
183 TestKicCustomNetwork/create_custom_network 43.05
184 TestKicCustomNetwork/use_default_bridge_network 38.43
185 TestKicExistingNetwork 34.43
186 TestKicCustomSubnet 35.84
187 TestKicStaticIP 38.04
188 TestMainNoArgs 0.09
189 TestMinikubeProfile 70.78
192 TestMountStart/serial/StartWithMountFirst 8.72
193 TestMountStart/serial/VerifyMountFirst 0.28
194 TestMountStart/serial/StartWithMountSecond 7.44
195 TestMountStart/serial/VerifyMountSecond 0.29
196 TestMountStart/serial/DeleteFirst 1.7
197 TestMountStart/serial/VerifyMountPostDelete 0.37
198 TestMountStart/serial/Stop 1.23
199 TestMountStart/serial/RestartStopped 8.32
200 TestMountStart/serial/VerifyMountPostStop 0.31
203 TestMultiNode/serial/FreshStart2Nodes 95.4
204 TestMultiNode/serial/DeployApp2Nodes 5.62
206 TestMultiNode/serial/AddNode 50.43
207 TestMultiNode/serial/ProfileList 0.4
208 TestMultiNode/serial/CopyFile 11.15
209 TestMultiNode/serial/StopNode 2.4
210 TestMultiNode/serial/StartAfterStop 13.05
211 TestMultiNode/serial/RestartKeepsNodes 126.29
212 TestMultiNode/serial/DeleteNode 5.1
213 TestMultiNode/serial/StopMultiNode 24.09
214 TestMultiNode/serial/RestartMultiNode 81.21
215 TestMultiNode/serial/ValidateNameConflict 35.86
220 TestPreload 169.75
222 TestScheduledStopUnix 110.16
225 TestInsufficientStorage 10.7
228 TestKubernetesUpgrade 383.86
231 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
232 TestNoKubernetes/serial/StartWithK8s 48.07
233 TestNoKubernetes/serial/StartWithStopK8s 15.62
234 TestNoKubernetes/serial/Start 8.12
235 TestNoKubernetes/serial/VerifyK8sNotRunning 0.3
236 TestNoKubernetes/serial/ProfileList 0.92
237 TestNoKubernetes/serial/Stop 1.24
238 TestNoKubernetes/serial/StartNoArgs 7.49
239 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.39
247 TestNetworkPlugins/group/false 4.24
251 TestStoppedBinaryUpgrade/Setup 1.02
253 TestStoppedBinaryUpgrade/MinikubeLogs 0.67
262 TestPause/serial/Start 78.72
264 TestNetworkPlugins/group/auto/Start 88
265 TestNetworkPlugins/group/kindnet/Start 81.46
266 TestNetworkPlugins/group/auto/KubeletFlags 0.42
267 TestNetworkPlugins/group/auto/NetCatPod 13.53
268 TestNetworkPlugins/group/auto/DNS 0.26
269 TestNetworkPlugins/group/auto/Localhost 0.2
270 TestNetworkPlugins/group/auto/HairPin 0.23
271 TestNetworkPlugins/group/calico/Start 78.82
272 TestNetworkPlugins/group/kindnet/ControllerPod 5.08
273 TestNetworkPlugins/group/kindnet/KubeletFlags 0.5
274 TestNetworkPlugins/group/kindnet/NetCatPod 11.55
275 TestNetworkPlugins/group/kindnet/DNS 0.22
276 TestNetworkPlugins/group/kindnet/Localhost 0.21
277 TestNetworkPlugins/group/kindnet/HairPin 0.22
278 TestNetworkPlugins/group/custom-flannel/Start 73.11
279 TestNetworkPlugins/group/calico/ControllerPod 5.04
280 TestNetworkPlugins/group/calico/KubeletFlags 0.42
281 TestNetworkPlugins/group/calico/NetCatPod 13.56
282 TestNetworkPlugins/group/calico/DNS 0.31
283 TestNetworkPlugins/group/calico/Localhost 0.26
284 TestNetworkPlugins/group/calico/HairPin 0.26
285 TestNetworkPlugins/group/enable-default-cni/Start 88.93
286 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.35
287 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.45
288 TestNetworkPlugins/group/custom-flannel/DNS 0.33
289 TestNetworkPlugins/group/custom-flannel/Localhost 0.28
290 TestNetworkPlugins/group/custom-flannel/HairPin 0.31
291 TestNetworkPlugins/group/flannel/Start 61.75
292 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.39
293 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.44
294 TestNetworkPlugins/group/enable-default-cni/DNS 0.25
295 TestNetworkPlugins/group/enable-default-cni/Localhost 0.2
296 TestNetworkPlugins/group/enable-default-cni/HairPin 0.21
297 TestNetworkPlugins/group/flannel/ControllerPod 5.04
298 TestNetworkPlugins/group/flannel/KubeletFlags 0.44
299 TestNetworkPlugins/group/flannel/NetCatPod 11.45
300 TestNetworkPlugins/group/bridge/Start 51.98
301 TestNetworkPlugins/group/flannel/DNS 0.51
302 TestNetworkPlugins/group/flannel/Localhost 0.26
303 TestNetworkPlugins/group/flannel/HairPin 0.23
305 TestStartStop/group/old-k8s-version/serial/FirstStart 144.77
306 TestNetworkPlugins/group/bridge/KubeletFlags 0.54
307 TestNetworkPlugins/group/bridge/NetCatPod 14.46
308 TestNetworkPlugins/group/bridge/DNS 26.17
309 TestNetworkPlugins/group/bridge/Localhost 0.22
310 TestNetworkPlugins/group/bridge/HairPin 0.21
312 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 78.46
313 TestStartStop/group/old-k8s-version/serial/DeployApp 10.6
314 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.21
315 TestStartStop/group/old-k8s-version/serial/Stop 12.28
316 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.49
317 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.18
318 TestStartStop/group/default-k8s-diff-port/serial/Stop 15.27
319 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.2
320 TestStartStop/group/old-k8s-version/serial/SecondStart 420.66
321 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.22
322 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 363.56
323 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 13.03
324 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.12
325 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.36
326 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.81
328 TestStartStop/group/embed-certs/serial/FirstStart 90.97
329 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.03
330 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.17
331 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.53
332 TestStartStop/group/old-k8s-version/serial/Pause 4.97
334 TestStartStop/group/newest-cni/serial/FirstStart 45.2
335 TestStartStop/group/newest-cni/serial/DeployApp 0
336 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.19
337 TestStartStop/group/newest-cni/serial/Stop 1.28
338 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
339 TestStartStop/group/newest-cni/serial/SecondStart 34.01
340 TestStartStop/group/embed-certs/serial/DeployApp 9.58
341 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.25
342 TestStartStop/group/embed-certs/serial/Stop 12.38
343 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.31
344 TestStartStop/group/embed-certs/serial/SecondStart 356.1
345 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
346 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
347 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.34
348 TestStartStop/group/newest-cni/serial/Pause 3.84
350 TestStartStop/group/no-preload/serial/FirstStart 66.69
351 TestStartStop/group/no-preload/serial/DeployApp 10.79
352 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.29
353 TestStartStop/group/no-preload/serial/Stop 12.17
354 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.2
355 TestStartStop/group/no-preload/serial/SecondStart 346.16
356 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 10.05
357 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.12
358 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.36
359 TestStartStop/group/embed-certs/serial/Pause 3.53
360 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 11.03
361 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.11
362 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.37
363 TestStartStop/group/no-preload/serial/Pause 3.41
TestDownloadOnly/v1.16.0/json-events (10.38s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-490357 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-490357 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (10.376627885s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (10.38s)

                                                
                                    
TestDownloadOnly/v1.16.0/preload-exists (0.01s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.01s)

                                                
                                    
TestDownloadOnly/v1.16.0/LogsDuration (0.39s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-490357
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-490357: exit status 85 (391.907988ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-490357 | jenkins | v1.31.2 | 02 Oct 23 11:39 UTC |          |
	|         | -p download-only-490357        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/02 11:39:00
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.21.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 11:39:00.310294 2499603 out.go:296] Setting OutFile to fd 1 ...
	I1002 11:39:00.310484 2499603 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 11:39:00.310493 2499603 out.go:309] Setting ErrFile to fd 2...
	I1002 11:39:00.310499 2499603 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 11:39:00.310807 2499603 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17340-2494243/.minikube/bin
	W1002 11:39:00.310981 2499603 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17340-2494243/.minikube/config/config.json: open /home/jenkins/minikube-integration/17340-2494243/.minikube/config/config.json: no such file or directory
	I1002 11:39:00.311461 2499603 out.go:303] Setting JSON to true
	I1002 11:39:00.312561 2499603 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":69686,"bootTime":1696177054,"procs":232,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1045-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1002 11:39:00.312643 2499603 start.go:138] virtualization:  
	I1002 11:39:00.315728 2499603 out.go:97] [download-only-490357] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I1002 11:39:00.318137 2499603 out.go:169] MINIKUBE_LOCATION=17340
	W1002 11:39:00.316018 2499603 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/17340-2494243/.minikube/cache/preloaded-tarball: no such file or directory
	I1002 11:39:00.316095 2499603 notify.go:220] Checking for updates...
	I1002 11:39:00.320327 2499603 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 11:39:00.322629 2499603 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17340-2494243/kubeconfig
	I1002 11:39:00.324619 2499603 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17340-2494243/.minikube
	I1002 11:39:00.326734 2499603 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W1002 11:39:00.330569 2499603 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1002 11:39:00.330862 2499603 driver.go:373] Setting default libvirt URI to qemu:///system
	I1002 11:39:00.359232 2499603 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I1002 11:39:00.359325 2499603 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 11:39:00.443673 2499603 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:37 OomKillDisable:true NGoroutines:54 SystemTime:2023-10-02 11:39:00.433589891 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1045-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1002 11:39:00.443784 2499603 docker.go:294] overlay module found
	I1002 11:39:00.446062 2499603 out.go:97] Using the docker driver based on user configuration
	I1002 11:39:00.446090 2499603 start.go:298] selected driver: docker
	I1002 11:39:00.446098 2499603 start.go:902] validating driver "docker" against <nil>
	I1002 11:39:00.446213 2499603 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 11:39:00.514654 2499603 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:37 OomKillDisable:true NGoroutines:54 SystemTime:2023-10-02 11:39:00.504645606 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1045-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1002 11:39:00.514827 2499603 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I1002 11:39:00.515099 2499603 start_flags.go:384] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I1002 11:39:00.515249 2499603 start_flags.go:904] Wait components to verify : map[apiserver:true system_pods:true]
	I1002 11:39:00.517471 2499603 out.go:169] Using Docker driver with root privileges
	I1002 11:39:00.519526 2499603 cni.go:84] Creating CNI manager for ""
	I1002 11:39:00.519546 2499603 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 11:39:00.519562 2499603 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I1002 11:39:00.519577 2499603 start_flags.go:321] config:
	{Name:download-only-490357 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-490357 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 11:39:00.521522 2499603 out.go:97] Starting control plane node download-only-490357 in cluster download-only-490357
	I1002 11:39:00.521541 2499603 cache.go:122] Beginning downloading kic base image for docker with crio
	I1002 11:39:00.523315 2499603 out.go:97] Pulling base image ...
	I1002 11:39:00.523340 2499603 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1002 11:39:00.523481 2499603 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 in local docker daemon
	I1002 11:39:00.541173 2499603 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 in local docker daemon, skipping pull
	I1002 11:39:00.541198 2499603 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 to local cache
	I1002 11:39:00.542005 2499603 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 in local cache directory
	I1002 11:39:00.542118 2499603 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 to local cache
	I1002 11:39:00.595268 2499603 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4
	I1002 11:39:00.595294 2499603 cache.go:57] Caching tarball of preloaded images
	I1002 11:39:00.595435 2499603 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1002 11:39:00.597809 2499603 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I1002 11:39:00.597836 2499603 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4 ...
	I1002 11:39:00.724092 2499603 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:743cd3b7071469270e4dbdc0d89badaa -> /home/jenkins/minikube-integration/17340-2494243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4
	I1002 11:39:05.967968 2499603 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 as a tarball
	I1002 11:39:07.931794 2499603 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4 ...
	I1002 11:39:07.931954 2499603 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17340-2494243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4 ...
	I1002 11:39:09.048416 2499603 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on crio
	I1002 11:39:09.048848 2499603 profile.go:148] Saving config to /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/download-only-490357/config.json ...
	I1002 11:39:09.048883 2499603 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/download-only-490357/config.json: {Name:mk37104c5a9e925e243c6422fc85c8bd66e83703 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 11:39:09.049074 2499603 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1002 11:39:09.049266 2499603 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/linux/arm64/kubectl.sha1 -> /home/jenkins/minikube-integration/17340-2494243/.minikube/cache/linux/arm64/v1.16.0/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-490357"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.39s)

                                                
                                    
TestDownloadOnly/v1.28.2/json-events (11.84s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.2/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-490357 --force --alsologtostderr --kubernetes-version=v1.28.2 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-490357 --force --alsologtostderr --kubernetes-version=v1.28.2 --container-runtime=crio --driver=docker  --container-runtime=crio: (11.842174287s)
--- PASS: TestDownloadOnly/v1.28.2/json-events (11.84s)

                                                
                                    
TestDownloadOnly/v1.28.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.2/preload-exists
--- PASS: TestDownloadOnly/v1.28.2/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.2/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.2/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-490357
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-490357: exit status 85 (72.633678ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-490357 | jenkins | v1.31.2 | 02 Oct 23 11:39 UTC |          |
	|         | -p download-only-490357        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-490357 | jenkins | v1.31.2 | 02 Oct 23 11:39 UTC |          |
	|         | -p download-only-490357        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.2   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/02 11:39:11
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.21.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 11:39:11.086016 2499676 out.go:296] Setting OutFile to fd 1 ...
	I1002 11:39:11.086186 2499676 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 11:39:11.086191 2499676 out.go:309] Setting ErrFile to fd 2...
	I1002 11:39:11.086198 2499676 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 11:39:11.086521 2499676 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17340-2494243/.minikube/bin
	W1002 11:39:11.086672 2499676 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17340-2494243/.minikube/config/config.json: open /home/jenkins/minikube-integration/17340-2494243/.minikube/config/config.json: no such file or directory
	I1002 11:39:11.086949 2499676 out.go:303] Setting JSON to true
	I1002 11:39:11.087975 2499676 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":69697,"bootTime":1696177054,"procs":228,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1045-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1002 11:39:11.088068 2499676 start.go:138] virtualization:  
	I1002 11:39:11.115176 2499676 out.go:97] [download-only-490357] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I1002 11:39:11.148033 2499676 out.go:169] MINIKUBE_LOCATION=17340
	I1002 11:39:11.115499 2499676 notify.go:220] Checking for updates...
	I1002 11:39:11.223641 2499676 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 11:39:11.245824 2499676 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17340-2494243/kubeconfig
	I1002 11:39:11.278898 2499676 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17340-2494243/.minikube
	I1002 11:39:11.310753 2499676 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W1002 11:39:11.327525 2499676 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1002 11:39:11.328093 2499676 config.go:182] Loaded profile config "download-only-490357": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	W1002 11:39:11.328179 2499676 start.go:810] api.Load failed for download-only-490357: filestore "download-only-490357": Docker machine "download-only-490357" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1002 11:39:11.328305 2499676 driver.go:373] Setting default libvirt URI to qemu:///system
	W1002 11:39:11.328336 2499676 start.go:810] api.Load failed for download-only-490357: filestore "download-only-490357": Docker machine "download-only-490357" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1002 11:39:11.352296 2499676 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I1002 11:39:11.352384 2499676 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 11:39:11.422680 2499676 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:37 OomKillDisable:true NGoroutines:50 SystemTime:2023-10-02 11:39:11.41239858 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1045-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archit
ecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> Se
rverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1002 11:39:11.422791 2499676 docker.go:294] overlay module found
	I1002 11:39:11.425048 2499676 out.go:97] Using the docker driver based on existing profile
	I1002 11:39:11.425074 2499676 start.go:298] selected driver: docker
	I1002 11:39:11.425082 2499676 start.go:902] validating driver "docker" against &{Name:download-only-490357 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-490357 Namespace:default APIServerName:
minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:
SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 11:39:11.425280 2499676 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 11:39:11.496052 2499676 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:37 OomKillDisable:true NGoroutines:50 SystemTime:2023-10-02 11:39:11.486412493 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1045-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> S
erverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1002 11:39:11.496641 2499676 cni.go:84] Creating CNI manager for ""
	I1002 11:39:11.496660 2499676 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 11:39:11.496670 2499676 start_flags.go:321] config:
	{Name:download-only-490357 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:download-only-490357 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 11:39:11.498910 2499676 out.go:97] Starting control plane node download-only-490357 in cluster download-only-490357
	I1002 11:39:11.498938 2499676 cache.go:122] Beginning downloading kic base image for docker with crio
	I1002 11:39:11.500663 2499676 out.go:97] Pulling base image ...
	I1002 11:39:11.500699 2499676 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1002 11:39:11.500791 2499676 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 in local docker daemon
	I1002 11:39:11.518258 2499676 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 in local docker daemon, skipping pull
	I1002 11:39:11.518280 2499676 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 to local cache
	I1002 11:39:11.518398 2499676 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 in local cache directory
	I1002 11:39:11.518417 2499676 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 in local cache directory, skipping pull
	I1002 11:39:11.518422 2499676 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 exists in cache, skipping pull
	I1002 11:39:11.518444 2499676 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 as a tarball
	I1002 11:39:11.563667 2499676 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.2/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-arm64.tar.lz4
	I1002 11:39:11.563703 2499676 cache.go:57] Caching tarball of preloaded images
	I1002 11:39:11.563916 2499676 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1002 11:39:11.566132 2499676 out.go:97] Downloading Kubernetes v1.28.2 preload ...
	I1002 11:39:11.566174 2499676 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-arm64.tar.lz4 ...
	I1002 11:39:11.687507 2499676 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.2/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-arm64.tar.lz4?checksum=md5:ec283948b04358f92432bdd325b7fb0b -> /home/jenkins/minikube-integration/17340-2494243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-arm64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-490357"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.2/LogsDuration (0.07s)
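The preload tarball in the log above is fetched with an `?checksum=md5:…` suffix, which minikube's download helper verifies after the transfer completes. A minimal, illustrative sketch of that verify-after-download step (the function name and file contents are made up for the demo; this is not minikube's actual code):

```python
import hashlib
import tempfile

def md5_matches(path: str, expected_hex: str, chunk_size: int = 1 << 20) -> bool:
    """Hash the file in chunks and compare against the expected md5 digest."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest() == expected_hex

# Demo: verify a freshly written stand-in for the downloaded tarball.
with tempfile.NamedTemporaryFile(delete=False, suffix=".tar.lz4") as f:
    f.write(b"preloaded-images payload")
    tarball = f.name

expected = hashlib.md5(b"preloaded-images payload").hexdigest()
print(md5_matches(tarball, expected))  # True for an intact file
```

Chunked hashing keeps memory flat even for multi-hundred-megabyte preload tarballs like the one downloaded above.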

                                                
                                    
TestDownloadOnly/DeleteAll (13.39s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:187: (dbg) Run:  out/minikube-linux-arm64 delete --all
aaa_download_only_test.go:187: (dbg) Done: out/minikube-linux-arm64 delete --all: (13.384991265s)
--- PASS: TestDownloadOnly/DeleteAll (13.39s)

                                                
                                    
TestDownloadOnly/DeleteAlwaysSucceeds (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:199: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-490357
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
TestBinaryMirror (0.6s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:304: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-393333 --alsologtostderr --binary-mirror http://127.0.0.1:46105 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-393333" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-393333
--- PASS: TestBinaryMirror (0.60s)

                                                
                                    
TestAddons/Setup (138.36s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:89: (dbg) Run:  out/minikube-linux-arm64 start -p addons-346248 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns
addons_test.go:89: (dbg) Done: out/minikube-linux-arm64 start -p addons-346248 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns: (2m18.36483043s)
--- PASS: TestAddons/Setup (138.36s)
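Durations in this report use Go's `time.Duration` string format (`2m18.36483043s`, `72.633678ms`). A small, illustrative parser for that format, in case the timings need post-processing outside the Go toolchain (this helper is not part of the test harness):

```python
import re

# Seconds per Go duration unit; multi-letter units must be matched before "m"/"s".
_UNITS = {"h": 3600.0, "m": 60.0, "s": 1.0, "ms": 1e-3, "us": 1e-6, "ns": 1e-9}

def parse_go_duration(text: str) -> float:
    """Convert a Go time.Duration string like '2m18.36483043s' to seconds."""
    matches = re.findall(r"(\d+(?:\.\d+)?)(h|ms|us|ns|m|s)", text)
    # Reject input with leftover characters the pattern did not consume.
    if not matches or "".join(v + u for v, u in matches) != text:
        raise ValueError(f"not a Go duration: {text!r}")
    return sum(float(value) * _UNITS[unit] for value, unit in matches)

print(parse_go_duration("2m18.36483043s"))  # seconds, ≈ 138.36
```

Handy for, e.g., summing the per-test durations in the failure table at the top of this report.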

                                                
                                    
TestAddons/parallel/Registry (16.09s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:308: registry stabilized in 53.515324ms
addons_test.go:310: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-q9gtk" [f5a09aa6-1c6f-488e-88ee-7656c207927e] Running
addons_test.go:310: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.014966728s
addons_test.go:313: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-wxxk2" [51f22832-d869-487a-baa6-1753d0735683] Running
addons_test.go:313: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.013127778s
addons_test.go:318: (dbg) Run:  kubectl --context addons-346248 delete po -l run=registry-test --now
addons_test.go:323: (dbg) Run:  kubectl --context addons-346248 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:323: (dbg) Done: kubectl --context addons-346248 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.839547817s)
addons_test.go:337: (dbg) Run:  out/minikube-linux-arm64 -p addons-346248 ip
2023/10/02 11:42:11 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:366: (dbg) Run:  out/minikube-linux-arm64 -p addons-346248 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.09s)
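The registry check above probes the service with `wget --spider -S`, i.e. a request that only confirms the endpoint answers without fetching the body. A standalone sketch of the same reachability probe, demoed against a throwaway local server since `registry.kube-system.svc.cluster.local` only resolves in-cluster (the helper name is illustrative):

```python
import http.server
import threading
import urllib.request

def is_reachable(url: str, timeout: float = 5.0) -> bool:
    """HEAD the URL (like `wget --spider`) and report whether it answers 2xx."""
    req = urllib.request.Request(url, method="HEAD")
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return 200 <= resp.status < 300
    except Exception:  # connection refused, timeout, HTTP error, ...
        return False

# Demo against a local server bound to an ephemeral port.
server = http.server.HTTPServer(("127.0.0.1", 0), http.server.SimpleHTTPRequestHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
print(is_reachable(f"http://127.0.0.1:{server.server_port}/"))  # True
server.shutdown()
```

Like the `wget --spider` run in the test, a non-2xx response or connection failure counts as unreachable.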

                                                
                                    
TestAddons/parallel/InspektorGadget (10.85s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:816: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-f6ptr" [d335dc60-1d9c-4ed9-8dad-4bcb31122c2d] Running
addons_test.go:816: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.013589756s
addons_test.go:819: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-346248
addons_test.go:819: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-346248: (5.837623326s)
--- PASS: TestAddons/parallel/InspektorGadget (10.85s)

                                                
                                    
TestAddons/parallel/MetricsServer (6.02s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:385: metrics-server stabilized in 7.687262ms
addons_test.go:387: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-grq99" [9fe2087a-0c7f-4aa1-a866-60cbff2676c3] Running
addons_test.go:387: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.023876862s
addons_test.go:393: (dbg) Run:  kubectl --context addons-346248 top pods -n kube-system
addons_test.go:410: (dbg) Run:  out/minikube-linux-arm64 -p addons-346248 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.02s)

                                                
                                    
TestAddons/parallel/CSI (52.93s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:539: csi-hostpath-driver pods stabilized in 10.703077ms
addons_test.go:542: (dbg) Run:  kubectl --context addons-346248 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:547: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-346248 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-346248 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-346248 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-346248 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-346248 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-346248 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-346248 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-346248 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-346248 get pvc hpvc -o jsonpath={.status.phase} -n default
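The burst of identical `kubectl get pvc hpvc -o jsonpath={.status.phase}` lines above is the test helper polling the claim until it reports `Bound`. The retry pattern, sketched generically (the helper name, timeout, and interval are illustrative, not minikube's code):

```python
import time

def wait_for(probe, want, timeout=360.0, interval=2.0):
    """Re-run `probe` until it returns `want` or the deadline passes,
    mirroring the repeated jsonpath polls in the log above."""
    deadline = time.monotonic() + timeout
    got = None
    while time.monotonic() < deadline:
        got = probe()  # e.g. shell out to `kubectl get pvc ... -o jsonpath=...`
        if got == want:
            return got
        time.sleep(interval)
    raise TimeoutError(f"condition not met within {timeout}s (last: {got!r})")

# Demo with a fake PVC that becomes Bound on the third poll.
phases = iter(["Pending", "Pending", "Bound"])
print(wait_for(lambda: next(phases), "Bound", timeout=5.0, interval=0.0))  # Bound
```

In a real script the probe would shell out to kubectl; `kubectl wait --for=jsonpath=...` offers the same behavior server-side on recent kubectl versions.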
addons_test.go:552: (dbg) Run:  kubectl --context addons-346248 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [cbf2ef4e-c6e1-441f-8aa6-02508fd92c70] Pending
helpers_test.go:344: "task-pv-pod" [cbf2ef4e-c6e1-441f-8aa6-02508fd92c70] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [cbf2ef4e-c6e1-441f-8aa6-02508fd92c70] Running
addons_test.go:557: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 11.018958918s
addons_test.go:562: (dbg) Run:  kubectl --context addons-346248 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-346248 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-346248 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-346248 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:572: (dbg) Run:  kubectl --context addons-346248 delete pod task-pv-pod
addons_test.go:578: (dbg) Run:  kubectl --context addons-346248 delete pvc hpvc
addons_test.go:584: (dbg) Run:  kubectl --context addons-346248 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-346248 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-346248 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-346248 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-346248 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-346248 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-346248 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-346248 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-346248 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-346248 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-346248 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-346248 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-346248 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-346248 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-346248 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-346248 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-346248 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-346248 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-346248 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [87928c56-fe73-4f02-b89f-a41f85fe170e] Pending
helpers_test.go:344: "task-pv-pod-restore" [87928c56-fe73-4f02-b89f-a41f85fe170e] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [87928c56-fe73-4f02-b89f-a41f85fe170e] Running
addons_test.go:599: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.020812645s
addons_test.go:604: (dbg) Run:  kubectl --context addons-346248 delete pod task-pv-pod-restore
addons_test.go:608: (dbg) Run:  kubectl --context addons-346248 delete pvc hpvc-restore
addons_test.go:612: (dbg) Run:  kubectl --context addons-346248 delete volumesnapshot new-snapshot-demo
addons_test.go:616: (dbg) Run:  out/minikube-linux-arm64 -p addons-346248 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:616: (dbg) Done: out/minikube-linux-arm64 -p addons-346248 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.833722763s)
addons_test.go:620: (dbg) Run:  out/minikube-linux-arm64 -p addons-346248 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (52.93s)

TestAddons/parallel/Headlamp (13.72s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:802: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-346248 --alsologtostderr -v=1
addons_test.go:802: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-346248 --alsologtostderr -v=1: (1.679460352s)
addons_test.go:807: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-58b88cff49-m56g6" [f784bbe8-bc6e-402b-af71-4a55239bd0f7] Pending
helpers_test.go:344: "headlamp-58b88cff49-m56g6" [f784bbe8-bc6e-402b-af71-4a55239bd0f7] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-58b88cff49-m56g6" [f784bbe8-bc6e-402b-af71-4a55239bd0f7] Running
addons_test.go:807: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.036001744s
--- PASS: TestAddons/parallel/Headlamp (13.72s)

TestAddons/parallel/CloudSpanner (5.67s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:835: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-7d49f968d9-6ccgf" [1a6b2d4d-3f93-42e4-83f2-76eca3238fcd] Running / Ready:ContainersNotReady (containers with unready status: [cloud-spanner-emulator]) / ContainersReady:ContainersNotReady (containers with unready status: [cloud-spanner-emulator])
addons_test.go:835: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.010228513s
addons_test.go:838: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-346248
--- PASS: TestAddons/parallel/CloudSpanner (5.67s)

TestAddons/parallel/LocalPath (9.46s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:851: (dbg) Run:  kubectl --context addons-346248 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:857: (dbg) Run:  kubectl --context addons-346248 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:861: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-346248 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-346248 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-346248 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-346248 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-346248 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:864: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [9c5d5a0b-3908-46a7-a74d-64378dba8cf7] Pending
helpers_test.go:344: "test-local-path" [9c5d5a0b-3908-46a7-a74d-64378dba8cf7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [9c5d5a0b-3908-46a7-a74d-64378dba8cf7] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [9c5d5a0b-3908-46a7-a74d-64378dba8cf7] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:864: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.012703522s
addons_test.go:869: (dbg) Run:  kubectl --context addons-346248 get pvc test-pvc -o=json
addons_test.go:878: (dbg) Run:  out/minikube-linux-arm64 -p addons-346248 ssh "cat /opt/local-path-provisioner/pvc-630de221-724f-4414-8f37-7bb6fe233ffc_default_test-pvc/file1"
addons_test.go:890: (dbg) Run:  kubectl --context addons-346248 delete pod test-local-path
addons_test.go:894: (dbg) Run:  kubectl --context addons-346248 delete pvc test-pvc
addons_test.go:898: (dbg) Run:  out/minikube-linux-arm64 -p addons-346248 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (9.46s)

TestAddons/serial/GCPAuth/Namespaces (0.19s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:628: (dbg) Run:  kubectl --context addons-346248 create ns new-namespace
addons_test.go:642: (dbg) Run:  kubectl --context addons-346248 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.19s)

TestAddons/StoppedEnableDisable (12.39s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:150: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-346248
addons_test.go:150: (dbg) Done: out/minikube-linux-arm64 stop -p addons-346248: (12.071511564s)
addons_test.go:154: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-346248
addons_test.go:158: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-346248
addons_test.go:163: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-346248
--- PASS: TestAddons/StoppedEnableDisable (12.39s)

TestCertOptions (40.18s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-926506 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-926506 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (37.455165133s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-926506 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-926506 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-926506 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-926506" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-926506
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-926506: (2.009429547s)
--- PASS: TestCertOptions (40.18s)

TestCertExpiration (252.85s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-752167 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
E1002 12:16:56.260939 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/addons-346248/client.crt: no such file or directory
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-752167 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (42.778588888s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-752167 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-752167 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (27.978993106s)
helpers_test.go:175: Cleaning up "cert-expiration-752167" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-752167
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-752167: (2.086764679s)
--- PASS: TestCertExpiration (252.85s)

TestForceSystemdFlag (43.54s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-990972 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-990972 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (40.238079407s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-990972 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-990972" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-990972
E1002 12:17:14.743968 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/ingress-addon-legacy-999051/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-990972: (2.909398159s)
--- PASS: TestForceSystemdFlag (43.54s)

TestForceSystemdEnv (40.05s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-193623 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-193623 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (36.782013804s)
helpers_test.go:175: Cleaning up "force-systemd-env-193623" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-193623
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-193623: (3.266526699s)
--- PASS: TestForceSystemdEnv (40.05s)

TestErrorSpam/setup (32.11s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-383418 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-383418 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-383418 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-383418 --driver=docker  --container-runtime=crio: (32.107760584s)
--- PASS: TestErrorSpam/setup (32.11s)

TestErrorSpam/start (0.85s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-383418 --log_dir /tmp/nospam-383418 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-383418 --log_dir /tmp/nospam-383418 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-383418 --log_dir /tmp/nospam-383418 start --dry-run
--- PASS: TestErrorSpam/start (0.85s)

TestErrorSpam/status (1.16s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-383418 --log_dir /tmp/nospam-383418 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-383418 --log_dir /tmp/nospam-383418 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-383418 --log_dir /tmp/nospam-383418 status
--- PASS: TestErrorSpam/status (1.16s)

TestErrorSpam/pause (1.93s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-383418 --log_dir /tmp/nospam-383418 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-383418 --log_dir /tmp/nospam-383418 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-383418 --log_dir /tmp/nospam-383418 pause
--- PASS: TestErrorSpam/pause (1.93s)

TestErrorSpam/unpause (2.08s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-383418 --log_dir /tmp/nospam-383418 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-383418 --log_dir /tmp/nospam-383418 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-383418 --log_dir /tmp/nospam-383418 unpause
--- PASS: TestErrorSpam/unpause (2.08s)

TestErrorSpam/stop (1.43s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-383418 --log_dir /tmp/nospam-383418 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-383418 --log_dir /tmp/nospam-383418 stop: (1.228279103s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-383418 --log_dir /tmp/nospam-383418 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-383418 --log_dir /tmp/nospam-383418 stop
--- PASS: TestErrorSpam/stop (1.43s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/17340-2494243/.minikube/files/etc/test/nested/copy/2499598/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (80.54s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-arm64 start -p functional-262988 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E1002 11:46:56.257229 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/addons-346248/client.crt: no such file or directory
E1002 11:46:56.263840 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/addons-346248/client.crt: no such file or directory
E1002 11:46:56.274124 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/addons-346248/client.crt: no such file or directory
E1002 11:46:56.294443 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/addons-346248/client.crt: no such file or directory
E1002 11:46:56.334757 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/addons-346248/client.crt: no such file or directory
E1002 11:46:56.415050 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/addons-346248/client.crt: no such file or directory
E1002 11:46:56.575180 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/addons-346248/client.crt: no such file or directory
E1002 11:46:56.895650 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/addons-346248/client.crt: no such file or directory
E1002 11:46:57.535874 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/addons-346248/client.crt: no such file or directory
E1002 11:46:58.816050 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/addons-346248/client.crt: no such file or directory
E1002 11:47:01.376536 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/addons-346248/client.crt: no such file or directory
E1002 11:47:06.497622 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/addons-346248/client.crt: no such file or directory
E1002 11:47:16.738666 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/addons-346248/client.crt: no such file or directory
E1002 11:47:37.218842 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/addons-346248/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-linux-arm64 start -p functional-262988 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m20.543001324s)
--- PASS: TestFunctional/serial/StartWithProxy (80.54s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (43.57s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-arm64 start -p functional-262988 --alsologtostderr -v=8
E1002 11:48:18.179767 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/addons-346248/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-linux-arm64 start -p functional-262988 --alsologtostderr -v=8: (43.564697898s)
functional_test.go:659: soft start took 43.565299753s for "functional-262988" cluster.
--- PASS: TestFunctional/serial/SoftStart (43.57s)

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

TestFunctional/serial/KubectlGetPods (0.12s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-262988 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.12s)

TestFunctional/serial/CacheCmd/cache/add_remote (4.23s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-262988 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-262988 cache add registry.k8s.io/pause:3.1: (1.47772005s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-262988 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-262988 cache add registry.k8s.io/pause:3.3: (1.448373991s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-262988 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-262988 cache add registry.k8s.io/pause:latest: (1.301620434s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.23s)

TestFunctional/serial/CacheCmd/cache/add_local (1.13s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-262988 /tmp/TestFunctionalserialCacheCmdcacheadd_local1946569591/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-arm64 -p functional-262988 cache add minikube-local-cache-test:functional-262988
functional_test.go:1090: (dbg) Run:  out/minikube-linux-arm64 -p functional-262988 cache delete minikube-local-cache-test:functional-262988
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-262988
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.13s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.33s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-arm64 -p functional-262988 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.33s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.25s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-arm64 -p functional-262988 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-arm64 -p functional-262988 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-262988 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (330.775479ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-arm64 -p functional-262988 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-linux-arm64 -p functional-262988 cache reload: (1.208615391s)
functional_test.go:1159: (dbg) Run:  out/minikube-linux-arm64 -p functional-262988 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.25s)

TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.16s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-arm64 -p functional-262988 kubectl -- --context functional-262988 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.16s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.15s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-262988 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.15s)

TestFunctional/serial/ExtraConfig (37.24s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-arm64 start -p functional-262988 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-linux-arm64 start -p functional-262988 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (37.242899242s)
functional_test.go:757: restart took 37.243007541s for "functional-262988" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (37.24s)

TestFunctional/serial/ComponentHealth (0.11s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-262988 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.11s)

TestFunctional/serial/LogsCmd (1.91s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-arm64 -p functional-262988 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-arm64 -p functional-262988 logs: (1.905485267s)
--- PASS: TestFunctional/serial/LogsCmd (1.91s)

TestFunctional/serial/LogsFileCmd (1.88s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-arm64 -p functional-262988 logs --file /tmp/TestFunctionalserialLogsFileCmd2819657758/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-arm64 -p functional-262988 logs --file /tmp/TestFunctionalserialLogsFileCmd2819657758/001/logs.txt: (1.881655531s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.88s)

TestFunctional/serial/InvalidService (4.69s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-262988 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-262988
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-262988: exit status 115 (676.384798ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:32591 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-262988 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.69s)

TestFunctional/parallel/ConfigCmd (0.45s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-262988 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-262988 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-262988 config get cpus: exit status 14 (67.354437ms)

** stderr **
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-262988 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-262988 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-262988 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-262988 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-262988 config get cpus: exit status 14 (69.595268ms)

** stderr **
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.45s)
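The ConfigCmd run above exercises minikube's convention that `config get` on an unset key fails with exit status 14 ("specified key could not be found in config"), while `set` and `unset` succeed. The round-trip can be modeled in a few lines; the `MiniConfig` class and constants below are illustrative stand-ins, not minikube's actual implementation.

```python
# Illustrative model of the set/get/unset round-trip exercised above.
# EXIT_NOT_FOUND mirrors the exit status 14 seen in the log; the class
# itself is a hypothetical stand-in, not minikube code.
EXIT_OK = 0
EXIT_NOT_FOUND = 14

class MiniConfig:
    def __init__(self):
        self._store = {}

    def set(self, key, value):
        self._store[key] = value
        return EXIT_OK

    def unset(self, key):
        # Unsetting a missing key is not an error, matching the log above.
        self._store.pop(key, None)
        return EXIT_OK

    def get(self, key):
        """Return (output, exit_code); a missing key yields exit code 14."""
        if key not in self._store:
            return ("Error: specified key could not be found in config",
                    EXIT_NOT_FOUND)
        return (self._store[key], EXIT_OK)

cfg = MiniConfig()
cfg.unset("cpus")
print(cfg.get("cpus")[1])   # 14: key absent
cfg.set("cpus", "2")
print(cfg.get("cpus"))      # ('2', 0)
cfg.unset("cpus")
print(cfg.get("cpus")[1])   # 14 again after unset
```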

TestFunctional/parallel/DashboardCmd (10.14s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-262988 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-262988 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 2524487: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (10.14s)

TestFunctional/parallel/DryRun (0.61s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-arm64 start -p functional-262988 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-262988 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (212.727842ms)

-- stdout --
	* [functional-262988] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17340
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17340-2494243/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17340-2494243/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1002 11:49:56.343116 2524121 out.go:296] Setting OutFile to fd 1 ...
	I1002 11:49:56.343340 2524121 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 11:49:56.343369 2524121 out.go:309] Setting ErrFile to fd 2...
	I1002 11:49:56.343391 2524121 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 11:49:56.343668 2524121 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17340-2494243/.minikube/bin
	I1002 11:49:56.344080 2524121 out.go:303] Setting JSON to false
	I1002 11:49:56.345188 2524121 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":70342,"bootTime":1696177054,"procs":323,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1045-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1002 11:49:56.345302 2524121 start.go:138] virtualization:  
	I1002 11:49:56.347725 2524121 out.go:177] * [functional-262988] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I1002 11:49:56.350734 2524121 out.go:177]   - MINIKUBE_LOCATION=17340
	I1002 11:49:56.350898 2524121 notify.go:220] Checking for updates...
	I1002 11:49:56.358117 2524121 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 11:49:56.360996 2524121 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17340-2494243/kubeconfig
	I1002 11:49:56.363152 2524121 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17340-2494243/.minikube
	I1002 11:49:56.365646 2524121 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 11:49:56.367706 2524121 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 11:49:56.370128 2524121 config.go:182] Loaded profile config "functional-262988": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1002 11:49:56.370675 2524121 driver.go:373] Setting default libvirt URI to qemu:///system
	I1002 11:49:56.402779 2524121 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I1002 11:49:56.402941 2524121 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 11:49:56.486958 2524121 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:39 OomKillDisable:true NGoroutines:56 SystemTime:2023-10-02 11:49:56.475688948 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1045-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1002 11:49:56.487063 2524121 docker.go:294] overlay module found
	I1002 11:49:56.489543 2524121 out.go:177] * Using the docker driver based on existing profile
	I1002 11:49:56.491777 2524121 start.go:298] selected driver: docker
	I1002 11:49:56.491801 2524121 start.go:902] validating driver "docker" against &{Name:functional-262988 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:functional-262988 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 11:49:56.491935 2524121 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 11:49:56.495312 2524121 out.go:177] 
	W1002 11:49:56.497422 2524121 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1002 11:49:56.499513 2524121 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-arm64 start -p functional-262988 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.61s)
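The dry-run failure above comes from minikube's requested-memory validation: 250MiB is below the 1800MB usable floor, so `start` aborts with exit status 23 and reason RSRC_INSUFFICIENT_REQ_MEMORY before the driver is touched. A rough sketch of that check follows; the function name and the hard-coded threshold are assumptions for illustration, not minikube's exact code.

```python
# Rough sketch of the requested-memory floor behind
# RSRC_INSUFFICIENT_REQ_MEMORY; names and threshold are illustrative only.
MIN_USABLE_MB = 1800  # the "usable minimum of 1800MB" quoted in the log

def validate_requested_memory(requested_mib: int) -> tuple[bool, str]:
    """Return (ok, message) for a --memory request given in MiB."""
    if requested_mib < MIN_USABLE_MB:
        return (False,
                f"Requested memory allocation {requested_mib}MiB is less "
                f"than the usable minimum of {MIN_USABLE_MB}MB")
    return (True, "")

print(validate_requested_memory(250)[0])   # False -> start aborts (exit 23)
print(validate_requested_memory(4000)[0])  # True  -> start proceeds
```

Note that the real error message mixes MiB (the request) and MB (the floor) exactly as reproduced here.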

TestFunctional/parallel/InternationalLanguage (0.21s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-arm64 start -p functional-262988 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-262988 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (210.272176ms)

-- stdout --
	* [functional-262988] minikube v1.31.2 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17340
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17340-2494243/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17340-2494243/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1002 11:49:56.135947 2524082 out.go:296] Setting OutFile to fd 1 ...
	I1002 11:49:56.136143 2524082 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 11:49:56.136184 2524082 out.go:309] Setting ErrFile to fd 2...
	I1002 11:49:56.136204 2524082 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 11:49:56.136617 2524082 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17340-2494243/.minikube/bin
	I1002 11:49:56.137062 2524082 out.go:303] Setting JSON to false
	I1002 11:49:56.138223 2524082 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":70342,"bootTime":1696177054,"procs":324,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1045-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1002 11:49:56.138341 2524082 start.go:138] virtualization:  
	I1002 11:49:56.140850 2524082 out.go:177] * [functional-262988] minikube v1.31.2 sur Ubuntu 20.04 (arm64)
	I1002 11:49:56.143059 2524082 notify.go:220] Checking for updates...
	I1002 11:49:56.143113 2524082 out.go:177]   - MINIKUBE_LOCATION=17340
	I1002 11:49:56.145548 2524082 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 11:49:56.147232 2524082 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17340-2494243/kubeconfig
	I1002 11:49:56.149233 2524082 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17340-2494243/.minikube
	I1002 11:49:56.151026 2524082 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 11:49:56.152692 2524082 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 11:49:56.155045 2524082 config.go:182] Loaded profile config "functional-262988": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1002 11:49:56.155627 2524082 driver.go:373] Setting default libvirt URI to qemu:///system
	I1002 11:49:56.181158 2524082 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I1002 11:49:56.181265 2524082 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 11:49:56.276934 2524082 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:39 OomKillDisable:true NGoroutines:56 SystemTime:2023-10-02 11:49:56.266634784 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1045-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1002 11:49:56.277048 2524082 docker.go:294] overlay module found
	I1002 11:49:56.279113 2524082 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I1002 11:49:56.281222 2524082 start.go:298] selected driver: docker
	I1002 11:49:56.281242 2524082 start.go:902] validating driver "docker" against &{Name:functional-262988 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:functional-262988 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 11:49:56.281352 2524082 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 11:49:56.283662 2524082 out.go:177] 
	W1002 11:49:56.285557 2524082 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1002 11:49:56.287406 2524082 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.21s)

TestFunctional/parallel/StatusCmd (1.18s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-arm64 -p functional-262988 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-arm64 -p functional-262988 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-arm64 -p functional-262988 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.18s)

TestFunctional/parallel/ServiceCmdConnect (10.8s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1626: (dbg) Run:  kubectl --context functional-262988 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-262988 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-7799dfb7c6-wcrnj" [7cac7edf-9d85-48e9-928f-aa4a5c7abda9] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-7799dfb7c6-wcrnj" [7cac7edf-9d85-48e9-928f-aa4a5c7abda9] Running
E1002 11:49:40.100036 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/addons-346248/client.crt: no such file or directory
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.029949183s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-arm64 -p functional-262988 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.49.2:31152
functional_test.go:1674: http://192.168.49.2:31152: success! body:

Hostname: hello-node-connect-7799dfb7c6-wcrnj

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31152
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.80s)

TestFunctional/parallel/AddonsCmd (0.22s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-arm64 -p functional-262988 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-arm64 -p functional-262988 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.22s)

TestFunctional/parallel/PersistentVolumeClaim (26.05s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [2878e20a-0e15-4bc5-81ec-f18d8d90b9c3] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.033655673s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-262988 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-262988 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-262988 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-262988 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [5a1eac2e-56fc-4c0f-8781-682ce8f6eba9] Pending
helpers_test.go:344: "sp-pod" [5a1eac2e-56fc-4c0f-8781-682ce8f6eba9] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [5a1eac2e-56fc-4c0f-8781-682ce8f6eba9] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.019448009s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-262988 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-262988 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-262988 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [42c06c72-c8da-4171-b862-87522be7f9d8] Pending
helpers_test.go:344: "sp-pod" [42c06c72-c8da-4171-b862-87522be7f9d8] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [42c06c72-c8da-4171-b862-87522be7f9d8] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.017487978s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-262988 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (26.05s)
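The claim the PVC test applies (`testdata/storage-provisioner/pvc.yaml`, surfaced above as `kubectl get pvc myclaim`) has roughly this shape. The field values below are a sketch; the actual testdata file may differ:

```yaml
# Illustrative only: the real testdata/storage-provisioner/pvc.yaml may differ.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim          # matches "kubectl get pvc myclaim" in the log above
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 500Mi
```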

TestFunctional/parallel/SSHCmd (0.77s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-arm64 -p functional-262988 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-arm64 -p functional-262988 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.77s)

TestFunctional/parallel/CpCmd (1.47s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-262988 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-262988 ssh -n functional-262988 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-262988 cp functional-262988:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1863451617/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-262988 ssh -n functional-262988 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.47s)

TestFunctional/parallel/FileSync (0.34s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/2499598/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-arm64 -p functional-262988 ssh "sudo cat /etc/test/nested/copy/2499598/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.34s)

TestFunctional/parallel/CertSync (2.39s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/2499598.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-262988 ssh "sudo cat /etc/ssl/certs/2499598.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/2499598.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-262988 ssh "sudo cat /usr/share/ca-certificates/2499598.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-262988 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/24995982.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-262988 ssh "sudo cat /etc/ssl/certs/24995982.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/24995982.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-262988 ssh "sudo cat /usr/share/ca-certificates/24995982.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-262988 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.39s)

TestFunctional/parallel/NodeLabels (0.12s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-262988 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.12s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.77s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-262988 ssh "sudo systemctl is-active docker"
2023/10/02 11:50:06 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-262988 ssh "sudo systemctl is-active docker": exit status 1 (437.779545ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-262988 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-262988 ssh "sudo systemctl is-active containerd": exit status 1 (327.134418ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.77s)

TestFunctional/parallel/License (0.41s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.41s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.63s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-262988 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-262988 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-262988 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-262988 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 2522290: os: process already finished
helpers_test.go:502: unable to terminate pid 2522163: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.63s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-262988 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.46s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-262988 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [fc5ed7f3-b22d-4100-acd5-3426a39d5990] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [fc5ed7f3-b22d-4100-acd5-3426a39d5990] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.017573233s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.46s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.13s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-262988 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.13s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.105.117.218 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-262988 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ServiceCmd/DeployApp (7.26s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1436: (dbg) Run:  kubectl --context functional-262988 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-262988 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-759d89bdcc-x65w8" [acdcfc2c-06b7-470b-8fe0-801ece3fcfa9] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-759d89bdcc-x65w8" [acdcfc2c-06b7-470b-8fe0-801ece3fcfa9] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.020343735s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.26s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.5s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.50s)

TestFunctional/parallel/ProfileCmd/profile_list (0.43s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1314: Took "368.685142ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1328: Took "57.345147ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.43s)
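The `Took "368.685142ms"` lines above use Go's duration formatting. Converting such strings to seconds for comparison can be sketched as below; the helper is illustrative, not part of the test suite:

```python
import re

# Seconds per Go duration unit; Go emits ns, µs (or us), ms, s, m, h.
UNITS = {"ns": 1e-9, "us": 1e-6, "µs": 1e-6, "ms": 1e-3, "s": 1.0, "m": 60.0, "h": 3600.0}

def go_duration_to_seconds(text):
    """Convert a Go duration string like "368.685142ms" or "1m30s" to seconds."""
    total = 0.0
    for value, unit in re.findall(r"([\d.]+)(ns|us|µs|ms|s|m|h)", text):
        total += float(value) * UNITS[unit]
    return total

print(go_duration_to_seconds("368.685142ms"))
print(go_duration_to_seconds("1m30s"))
```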

TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1365: Took "342.487206ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1378: Took "68.98232ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)

TestFunctional/parallel/ServiceCmd/List (0.65s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-arm64 -p functional-262988 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.65s)

TestFunctional/parallel/MountCmd/any-port (7.91s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-262988 /tmp/TestFunctionalparallelMountCmdany-port1391266508/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1696247392130051560" to /tmp/TestFunctionalparallelMountCmdany-port1391266508/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1696247392130051560" to /tmp/TestFunctionalparallelMountCmdany-port1391266508/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1696247392130051560" to /tmp/TestFunctionalparallelMountCmdany-port1391266508/001/test-1696247392130051560
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-262988 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-262988 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (514.123408ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-262988 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-262988 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct  2 11:49 created-by-test
-rw-r--r-- 1 docker docker 24 Oct  2 11:49 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct  2 11:49 test-1696247392130051560
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-262988 ssh cat /mount-9p/test-1696247392130051560
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-262988 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [21e411dd-d275-4b16-973e-07032dfc9793] Pending
helpers_test.go:344: "busybox-mount" [21e411dd-d275-4b16-973e-07032dfc9793] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [21e411dd-d275-4b16-973e-07032dfc9793] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [21e411dd-d275-4b16-973e-07032dfc9793] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.021685497s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-262988 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-262988 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-262988 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-262988 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-262988 /tmp/TestFunctionalparallelMountCmdany-port1391266508/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.91s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.55s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-arm64 -p functional-262988 service list -o json
functional_test.go:1493: Took "551.851598ms" to run "out/minikube-linux-arm64 -p functional-262988 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.55s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.53s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-arm64 -p functional-262988 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.49.2:31636
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.53s)

TestFunctional/parallel/ServiceCmd/Format (0.57s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-arm64 -p functional-262988 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.57s)

TestFunctional/parallel/ServiceCmd/URL (0.5s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-arm64 -p functional-262988 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.49.2:31636
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.50s)

TestFunctional/parallel/MountCmd/specific-port (2.92s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-262988 /tmp/TestFunctionalparallelMountCmdspecific-port2092869318/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-262988 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-262988 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (763.888592ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-262988 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-262988 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-262988 /tmp/TestFunctionalparallelMountCmdspecific-port2092869318/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-262988 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-262988 ssh "sudo umount -f /mount-9p": exit status 1 (411.971099ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-262988 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-262988 /tmp/TestFunctionalparallelMountCmdspecific-port2092869318/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.92s)

TestFunctional/parallel/MountCmd/VerifyCleanup (3.07s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-262988 /tmp/TestFunctionalparallelMountCmdVerifyCleanup49641875/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-262988 /tmp/TestFunctionalparallelMountCmdVerifyCleanup49641875/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-262988 /tmp/TestFunctionalparallelMountCmdVerifyCleanup49641875/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-262988 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-262988 ssh "findmnt -T" /mount1: exit status 1 (1.160181388s)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-262988 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-262988 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-262988 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-262988 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-262988 /tmp/TestFunctionalparallelMountCmdVerifyCleanup49641875/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-262988 /tmp/TestFunctionalparallelMountCmdVerifyCleanup49641875/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-262988 /tmp/TestFunctionalparallelMountCmdVerifyCleanup49641875/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (3.07s)

TestFunctional/parallel/Version/short (0.08s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-arm64 -p functional-262988 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)

TestFunctional/parallel/Version/components (0.99s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-arm64 -p functional-262988 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.99s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-262988 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-262988 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.2
registry.k8s.io/kube-proxy:v1.28.2
registry.k8s.io/kube-controller-manager:v1.28.2
registry.k8s.io/kube-apiserver:v1.28.2
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-262988
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20230809-80a64d96
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-262988 image ls --format short --alsologtostderr:
I1002 11:50:26.562769 2526687 out.go:296] Setting OutFile to fd 1 ...
I1002 11:50:26.562918 2526687 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1002 11:50:26.562927 2526687 out.go:309] Setting ErrFile to fd 2...
I1002 11:50:26.562933 2526687 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1002 11:50:26.563181 2526687 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17340-2494243/.minikube/bin
I1002 11:50:26.563888 2526687 config.go:182] Loaded profile config "functional-262988": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
I1002 11:50:26.564043 2526687 config.go:182] Loaded profile config "functional-262988": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
I1002 11:50:26.564720 2526687 cli_runner.go:164] Run: docker container inspect functional-262988 --format={{.State.Status}}
I1002 11:50:26.595958 2526687 ssh_runner.go:195] Run: systemctl --version
I1002 11:50:26.596026 2526687 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-262988
I1002 11:50:26.620911 2526687 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35882 SSHKeyPath:/home/jenkins/minikube-integration/17340-2494243/.minikube/machines/functional-262988/id_rsa Username:docker}
I1002 11:50:26.718913 2526687 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.27s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-262988 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-262988 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| docker.io/library/nginx                 | alpine             | df8fd1ca35d66 | 45.3MB |
| registry.k8s.io/kube-apiserver          | v1.28.2            | 30bb499447fe1 | 121MB  |
| docker.io/library/nginx                 | latest             | 2a4fbb36e9660 | 196MB  |
| registry.k8s.io/echoserver-arm          | 1.8                | 72565bf5bbedf | 87.5MB |
| registry.k8s.io/kube-proxy              | v1.28.2            | 7da62c127fc0f | 69.9MB |
| registry.k8s.io/kube-scheduler          | v1.28.2            | 64fc40cee3716 | 59.2MB |
| registry.k8s.io/pause                   | 3.3                | 3d18732f8686c | 487kB  |
| registry.k8s.io/pause                   | 3.9                | 829e9de338bd5 | 520kB  |
| docker.io/kindest/kindnetd              | v20230809-80a64d96 | 04b4eaa3d3db8 | 60.9MB |
| gcr.io/google-containers/addon-resizer  | functional-262988  | ffd4cfbbe753e | 34.1MB |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 1611cd07b61d5 | 3.77MB |
| registry.k8s.io/coredns/coredns         | v1.10.1            | 97e04611ad434 | 51.4MB |
| registry.k8s.io/kube-controller-manager | v1.28.2            | 89d57b83c1786 | 117MB  |
| registry.k8s.io/pause                   | latest             | 8cb2091f603e7 | 246kB  |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | ba04bb24b9575 | 29MB   |
| registry.k8s.io/etcd                    | 3.5.9-0            | 9cdd6470f48c8 | 182MB  |
| registry.k8s.io/pause                   | 3.1                | 8057e0500773a | 529kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-262988 image ls --format table --alsologtostderr:
I1002 11:50:27.206142 2526819 out.go:296] Setting OutFile to fd 1 ...
I1002 11:50:27.206274 2526819 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1002 11:50:27.206281 2526819 out.go:309] Setting ErrFile to fd 2...
I1002 11:50:27.206287 2526819 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1002 11:50:27.206526 2526819 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17340-2494243/.minikube/bin
I1002 11:50:27.207249 2526819 config.go:182] Loaded profile config "functional-262988": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
I1002 11:50:27.207410 2526819 config.go:182] Loaded profile config "functional-262988": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
I1002 11:50:27.209956 2526819 cli_runner.go:164] Run: docker container inspect functional-262988 --format={{.State.Status}}
I1002 11:50:27.230703 2526819 ssh_runner.go:195] Run: systemctl --version
I1002 11:50:27.230759 2526819 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-262988
I1002 11:50:27.250968 2526819 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35882 SSHKeyPath:/home/jenkins/minikube-integration/17340-2494243/.minikube/machines/functional-262988/id_rsa Username:docker}
I1002 11:50:27.353077 2526819 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.29s)
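The sizes in the table above are the raw byte counts reported by `crictl images` rendered in decimal (1000-based) units to three significant digits. A minimal sketch of that formatting, as a reimplementation for illustration rather than the helper minikube actually uses:

```python
def human_size(n):
    # Decimal units, three significant digits, mimicking the table
    # above (e.g. 45331256 bytes -> "45.3MB", 487479 -> "487kB").
    units = ["B", "kB", "MB", "GB", "TB"]
    v = float(n)
    i = 0
    while v >= 1000 and i < len(units) - 1:
        v /= 1000.0
        i += 1
    return f"{v:.3g}{units[i]}"

print(human_size(45331256), human_size(121054158), human_size(487479))
# -> 45.3MB 121MB 487kB
```

Feeding in the byte counts from the JSON/YAML listings below reproduces the Size column shown in this table.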

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-262988 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-262988 image ls --format json --alsologtostderr:
[{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},{"id":"7da62c127fc0f2c3473babe4dd0fe1da874278c4e524a490b1781e3e0e6dddfa","repoDigests":["registry.k8s.io/kube-proxy@sha256:41c8f92d1cd571e0e36af431f35c78379f84f5daf5b85d43014a9940d697afcf","registry.k8s.io/kube-proxy@sha256:714d43ef0334cfb0e15ffd89f0b385681374b72a4865be28ff891b6297c015b8"],"repoTags":["registry.k8s.io/kube-proxy:v1.28.2"],"size":"69926807"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf386
96cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"89d57b83c17862d0ca2dd214e9e5ad425f8d67ecba32d10b846f8d22d3b5597c","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:3c85f8a91743f4c306163137b121c64816c5c15bf2f002c9238ce0a4ac22b5c8","registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.2"],"size":"117187380"},{"id":"829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":["registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"520014"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/p
ause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"df8fd1ca35d66acf0c88cf3b0364ae8bd392860d54075094884e3d014e4d186b","repoDigests":["docker.io/library/nginx@sha256:4c93a3bd8bf95412889dd84213570102176b6052d88bb828eaf449c56aca55ef","docker.io/library/nginx@sha256:96032dda68e09456804a4939486df02acd5459c1e2b81c0eed017130098ca003"],"repoTags":["docker.io/library/nginx:alpine"],"size":"45331256"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108","repoDigests":["registry.k8s.io/coredns/coredns@sha256:74130b944396
a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105","registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"51393451"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-262988"],"size":"34114467"},{"id":"04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26","repoDigests":["docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052","docker.io/kindest/kindnetd@sha256:f61a1c916e587322444cab4e745a66c8bed6c30208e4dae28d5a1d18c070adb2"],"repoTags":["docker.io/kindest/kindnetd:v20230809-80a64d96"],"size":"60867618"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha
256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{"id":"2a4fbb36e96607b16e5af2e24dc6a1025a4795520c98c6b9ead9c4113617cb73","repoDigests":["docker.io/library/nginx@sha256:32da30332506740a2f7c34d5dc70467b7f14ec67d912703568daff790ab3f755","docker.io/library/nginx@sha256:65cd8f49af749786a95ea0c46a76c3269bb21cfcb0f0a81d2bbf0def96fb6324"],"repoTags":["docker.io/library/nginx:latest"],"size":"196196620"},{"id":"30bb499447fe1bc655853e2e8ac386cdeb28c80263536259cb54f290f9a58d6c","repoDigests":["registry.k8s.io/kube-apiserver@sha256:07742a71be5e2ac5dc434618fa720ba38bebb463e3bdc0c58b600b4f7716bc3d","registry.k8s.io/kube-apiserver@sha256:6beea2e5531a0606613594fd3ed92d71bbdcef99dd3237522049a0b32cad736c"],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.2"],"size":"121054158"},{"id":"64fc40cee3716a4596d219b360ce536adb7998eaeae3f5dbb774d6503f5039d7","repoDigests":[
"registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab","registry.k8s.io/kube-scheduler@sha256:f673cc4710d8ed6e3bd224b5641d2537d08e19177a291c2d9e189ea16f081c88"],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.2"],"size":"59188020"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"87536549"},{"id":"9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace","repoDigests":["registry.k8s.io/etcd@
sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3","registry.k8s.io/etcd@sha256:e60789d18cc66486e6db4094383f9732280092f07a1f5455ecbe11d404c8e48b"],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"182203183"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-262988 image ls --format json --alsologtostderr:
I1002 11:50:26.921257 2526748 out.go:296] Setting OutFile to fd 1 ...
I1002 11:50:26.924486 2526748 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1002 11:50:26.924504 2526748 out.go:309] Setting ErrFile to fd 2...
I1002 11:50:26.924512 2526748 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1002 11:50:26.924827 2526748 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17340-2494243/.minikube/bin
I1002 11:50:26.925586 2526748 config.go:182] Loaded profile config "functional-262988": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
I1002 11:50:26.925730 2526748 config.go:182] Loaded profile config "functional-262988": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
I1002 11:50:26.926216 2526748 cli_runner.go:164] Run: docker container inspect functional-262988 --format={{.State.Status}}
I1002 11:50:26.947781 2526748 ssh_runner.go:195] Run: systemctl --version
I1002 11:50:26.947842 2526748 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-262988
I1002 11:50:26.974532 2526748 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35882 SSHKeyPath:/home/jenkins/minikube-integration/17340-2494243/.minikube/machines/functional-262988/id_rsa Username:docker}
I1002 11:50:27.078664 2526748 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)
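The JSON format above is an array of objects with `id`, `repoDigests`, `repoTags`, and string-valued `size` fields. A small sketch of consuming it; the abridged sample below copies two entries from that output, whereas in practice the array would come straight from `image ls --format json`:

```python
import json

# Abridged sample in the same shape as the output above: untagged
# images carry an empty repoTags list, and size is a decimal string,
# not a number.
raw = '''[
 {"id": "3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300",
  "repoTags": ["registry.k8s.io/pause:3.3"], "size": "487479"},
 {"id": "a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a",
  "repoTags": [], "size": "42263767"}
]'''

images = json.loads(raw)
tagged = [t for img in images for t in img["repoTags"]]
total_bytes = sum(int(img["size"]) for img in images)
print(tagged)       # tags present in the listing
print(total_bytes)  # combined image size in bytes
```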

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-262988 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-262988 image ls --format yaml --alsologtostderr:
- id: df8fd1ca35d66acf0c88cf3b0364ae8bd392860d54075094884e3d014e4d186b
repoDigests:
- docker.io/library/nginx@sha256:4c93a3bd8bf95412889dd84213570102176b6052d88bb828eaf449c56aca55ef
- docker.io/library/nginx@sha256:96032dda68e09456804a4939486df02acd5459c1e2b81c0eed017130098ca003
repoTags:
- docker.io/library/nginx:alpine
size: "45331256"
- id: 2a4fbb36e96607b16e5af2e24dc6a1025a4795520c98c6b9ead9c4113617cb73
repoDigests:
- docker.io/library/nginx@sha256:32da30332506740a2f7c34d5dc70467b7f14ec67d912703568daff790ab3f755
- docker.io/library/nginx@sha256:65cd8f49af749786a95ea0c46a76c3269bb21cfcb0f0a81d2bbf0def96fb6324
repoTags:
- docker.io/library/nginx:latest
size: "196196620"
- id: 30bb499447fe1bc655853e2e8ac386cdeb28c80263536259cb54f290f9a58d6c
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:07742a71be5e2ac5dc434618fa720ba38bebb463e3bdc0c58b600b4f7716bc3d
- registry.k8s.io/kube-apiserver@sha256:6beea2e5531a0606613594fd3ed92d71bbdcef99dd3237522049a0b32cad736c
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.2
size: "121054158"
- id: 89d57b83c17862d0ca2dd214e9e5ad425f8d67ecba32d10b846f8d22d3b5597c
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:3c85f8a91743f4c306163137b121c64816c5c15bf2f002c9238ce0a4ac22b5c8
- registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.2
size: "117187380"
- id: 64fc40cee3716a4596d219b360ce536adb7998eaeae3f5dbb774d6503f5039d7
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab
- registry.k8s.io/kube-scheduler@sha256:f673cc4710d8ed6e3bd224b5641d2537d08e19177a291c2d9e189ea16f081c88
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.2
size: "59188020"
- id: 04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26
repoDigests:
- docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052
- docker.io/kindest/kindnetd@sha256:f61a1c916e587322444cab4e745a66c8bed6c30208e4dae28d5a1d18c070adb2
repoTags:
- docker.io/kindest/kindnetd:v20230809-80a64d96
size: "60867618"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: 9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace
repoDigests:
- registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3
- registry.k8s.io/etcd@sha256:e60789d18cc66486e6db4094383f9732280092f07a1f5455ecbe11d404c8e48b
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "182203183"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: 97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "51393451"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "87536549"
- id: 7da62c127fc0f2c3473babe4dd0fe1da874278c4e524a490b1781e3e0e6dddfa
repoDigests:
- registry.k8s.io/kube-proxy@sha256:41c8f92d1cd571e0e36af431f35c78379f84f5daf5b85d43014a9940d697afcf
- registry.k8s.io/kube-proxy@sha256:714d43ef0334cfb0e15ffd89f0b385681374b72a4865be28ff891b6297c015b8
repoTags:
- registry.k8s.io/kube-proxy:v1.28.2
size: "69926807"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: 829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests:
- registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
repoTags:
- registry.k8s.io/pause:3.9
size: "520014"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-262988
size: "34114467"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"

functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-262988 image ls --format yaml --alsologtostderr:
I1002 11:50:26.576383 2526688 out.go:296] Setting OutFile to fd 1 ...
I1002 11:50:26.576656 2526688 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1002 11:50:26.576687 2526688 out.go:309] Setting ErrFile to fd 2...
I1002 11:50:26.576707 2526688 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1002 11:50:26.577012 2526688 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17340-2494243/.minikube/bin
I1002 11:50:26.577850 2526688 config.go:182] Loaded profile config "functional-262988": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
I1002 11:50:26.578108 2526688 config.go:182] Loaded profile config "functional-262988": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
I1002 11:50:26.578696 2526688 cli_runner.go:164] Run: docker container inspect functional-262988 --format={{.State.Status}}
I1002 11:50:26.607729 2526688 ssh_runner.go:195] Run: systemctl --version
I1002 11:50:26.607785 2526688 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-262988
I1002 11:50:26.639587 2526688 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35882 SSHKeyPath:/home/jenkins/minikube-integration/17340-2494243/.minikube/machines/functional-262988/id_rsa Username:docker}
I1002 11:50:26.747349 2526688 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.33s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-arm64 -p functional-262988 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-262988 ssh pgrep buildkitd: exit status 1 (379.819678ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-arm64 -p functional-262988 image build -t localhost/my-image:functional-262988 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-arm64 -p functional-262988 image build -t localhost/my-image:functional-262988 testdata/build --alsologtostderr: (2.424101436s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-arm64 -p functional-262988 image build -t localhost/my-image:functional-262988 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 257becac1e2
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-262988
--> fe4445e6c50
Successfully tagged localhost/my-image:functional-262988
fe4445e6c5059ceb2a0968bc354228b5bd5411b5c19b5a5a85f6a306d4ebf03d
functional_test.go:322: (dbg) Stderr: out/minikube-linux-arm64 -p functional-262988 image build -t localhost/my-image:functional-262988 testdata/build --alsologtostderr:
I1002 11:50:27.239893 2526824 out.go:296] Setting OutFile to fd 1 ...
I1002 11:50:27.241019 2526824 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1002 11:50:27.241073 2526824 out.go:309] Setting ErrFile to fd 2...
I1002 11:50:27.241094 2526824 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1002 11:50:27.241459 2526824 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17340-2494243/.minikube/bin
I1002 11:50:27.242221 2526824 config.go:182] Loaded profile config "functional-262988": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
I1002 11:50:27.243639 2526824 config.go:182] Loaded profile config "functional-262988": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
I1002 11:50:27.244478 2526824 cli_runner.go:164] Run: docker container inspect functional-262988 --format={{.State.Status}}
I1002 11:50:27.277793 2526824 ssh_runner.go:195] Run: systemctl --version
I1002 11:50:27.277848 2526824 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-262988
I1002 11:50:27.299146 2526824 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35882 SSHKeyPath:/home/jenkins/minikube-integration/17340-2494243/.minikube/machines/functional-262988/id_rsa Username:docker}
I1002 11:50:27.400822 2526824 build_images.go:151] Building image from path: /tmp/build.1050471288.tar
I1002 11:50:27.400957 2526824 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1002 11:50:27.418762 2526824 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1050471288.tar
I1002 11:50:27.424935 2526824 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1050471288.tar: stat -c "%s %y" /var/lib/minikube/build/build.1050471288.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1050471288.tar': No such file or directory
I1002 11:50:27.424977 2526824 ssh_runner.go:362] scp /tmp/build.1050471288.tar --> /var/lib/minikube/build/build.1050471288.tar (3072 bytes)
I1002 11:50:27.460329 2526824 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1050471288
I1002 11:50:27.471741 2526824 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1050471288 -xf /var/lib/minikube/build/build.1050471288.tar
I1002 11:50:27.483717 2526824 crio.go:297] Building image: /var/lib/minikube/build/build.1050471288
I1002 11:50:27.483791 2526824 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-262988 /var/lib/minikube/build/build.1050471288 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I1002 11:50:29.551498 2526824 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-262988 /var/lib/minikube/build/build.1050471288 --cgroup-manager=cgroupfs: (2.067683974s)
I1002 11:50:29.551572 2526824 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1050471288
I1002 11:50:29.562492 2526824 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1050471288.tar
I1002 11:50:29.573767 2526824 build_images.go:207] Built localhost/my-image:functional-262988 from /tmp/build.1050471288.tar
I1002 11:50:29.573796 2526824 build_images.go:123] succeeded building to: functional-262988
I1002 11:50:29.573801 2526824 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-262988 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.04s)

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (2.06s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.024541329s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-262988
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.06s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.25s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-262988 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.25s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.23s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-262988 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.23s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.25s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-262988 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.25s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.88s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-arm64 -p functional-262988 image load --daemon gcr.io/google-containers/addon-resizer:functional-262988 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-arm64 -p functional-262988 image load --daemon gcr.io/google-containers/addon-resizer:functional-262988 --alsologtostderr: (4.633138837s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-262988 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.88s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.98s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-arm64 -p functional-262988 image load --daemon gcr.io/google-containers/addon-resizer:functional-262988 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-arm64 -p functional-262988 image load --daemon gcr.io/google-containers/addon-resizer:functional-262988 --alsologtostderr: (2.71391834s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-262988 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.98s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.7s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.852115981s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-262988
functional_test.go:244: (dbg) Run:  out/minikube-linux-arm64 -p functional-262988 image load --daemon gcr.io/google-containers/addon-resizer:functional-262988 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-arm64 -p functional-262988 image load --daemon gcr.io/google-containers/addon-resizer:functional-262988 --alsologtostderr: (3.586881318s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-262988 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.70s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.97s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-arm64 -p functional-262988 image save gcr.io/google-containers/addon-resizer:functional-262988 /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.97s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.56s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-arm64 -p functional-262988 image rm gcr.io/google-containers/addon-resizer:functional-262988 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-262988 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.56s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-arm64 -p functional-262988 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-arm64 -p functional-262988 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr: (1.058367926s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-262988 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.32s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.99s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-262988
functional_test.go:423: (dbg) Run:  out/minikube-linux-arm64 -p functional-262988 image save --daemon gcr.io/google-containers/addon-resizer:functional-262988 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-262988
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.99s)

                                                
                                    
TestFunctional/delete_addon-resizer_images (0.08s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-262988
--- PASS: TestFunctional/delete_addon-resizer_images (0.08s)

                                                
                                    
TestFunctional/delete_my-image_image (0.03s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-262988
--- PASS: TestFunctional/delete_my-image_image (0.03s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-262988
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestIngressAddonLegacy/StartLegacyK8sCluster (87.69s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-arm64 start -p ingress-addon-legacy-999051 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E1002 11:51:56.256410 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/addons-346248/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-arm64 start -p ingress-addon-legacy-999051 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (1m27.689267159s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (87.69s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (13.51s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-999051 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-999051 addons enable ingress --alsologtostderr -v=5: (13.507104938s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (13.51s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.69s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-999051 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.69s)

                                                
                                    
TestJSONOutput/start/Command (80.96s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-667499 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
E1002 11:55:46.665004 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/functional-262988/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-667499 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (1m20.954782731s)
--- PASS: TestJSONOutput/start/Command (80.96s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.85s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-667499 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.85s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.76s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-667499 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.76s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.97s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-667499 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-667499 --output=json --user=testUser: (5.970752469s)
--- PASS: TestJSONOutput/stop/Command (5.97s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.23s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-068226 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-068226 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (76.099474ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"57994eed-f473-4bb6-8917-572301db1d1e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-068226] minikube v1.31.2 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"a3054bf5-419c-476d-8d47-e0a97699bdde","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17340"}}
	{"specversion":"1.0","id":"6a74ff14-b351-4518-a133-0d3ca9696619","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"561ef7d9-79e9-462b-bd04-0d0eb8163c91","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17340-2494243/kubeconfig"}}
	{"specversion":"1.0","id":"0265ab09-5b9d-466d-bc96-d1b03f8ed977","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17340-2494243/.minikube"}}
	{"specversion":"1.0","id":"4a9781b2-a6be-422a-9ff3-b8ffba2a1b55","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"d9bedf44-2015-4ab6-8e19-812d11a2b283","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"5f760143-075e-40ce-95ae-e11bd357b3a8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-068226" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-068226
--- PASS: TestErrorJSONOutput (0.23s)

                                                
                                    
TestKicCustomNetwork/create_custom_network (43.05s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-198187 --network=
E1002 11:56:56.256573 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/addons-346248/client.crt: no such file or directory
E1002 11:57:08.585718 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/functional-262988/client.crt: no such file or directory
E1002 11:57:14.745623 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/ingress-addon-legacy-999051/client.crt: no such file or directory
E1002 11:57:14.752620 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/ingress-addon-legacy-999051/client.crt: no such file or directory
E1002 11:57:14.764700 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/ingress-addon-legacy-999051/client.crt: no such file or directory
E1002 11:57:14.785013 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/ingress-addon-legacy-999051/client.crt: no such file or directory
E1002 11:57:14.825265 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/ingress-addon-legacy-999051/client.crt: no such file or directory
E1002 11:57:14.905614 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/ingress-addon-legacy-999051/client.crt: no such file or directory
E1002 11:57:15.066055 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/ingress-addon-legacy-999051/client.crt: no such file or directory
E1002 11:57:15.386603 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/ingress-addon-legacy-999051/client.crt: no such file or directory
E1002 11:57:16.027743 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/ingress-addon-legacy-999051/client.crt: no such file or directory
E1002 11:57:17.307955 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/ingress-addon-legacy-999051/client.crt: no such file or directory
E1002 11:57:19.868492 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/ingress-addon-legacy-999051/client.crt: no such file or directory
E1002 11:57:24.988936 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/ingress-addon-legacy-999051/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-198187 --network=: (40.910001595s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-198187" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-198187
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-198187: (2.111553447s)
--- PASS: TestKicCustomNetwork/create_custom_network (43.05s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (38.43s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-739899 --network=bridge
E1002 11:57:35.229141 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/ingress-addon-legacy-999051/client.crt: no such file or directory
E1002 11:57:55.709553 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/ingress-addon-legacy-999051/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-739899 --network=bridge: (36.35781857s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-739899" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-739899
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-739899: (2.048709046s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (38.43s)

                                                
                                    
TestKicExistingNetwork (34.43s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-516705 --network=existing-network
E1002 11:58:36.669740 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/ingress-addon-legacy-999051/client.crt: no such file or directory
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-516705 --network=existing-network: (32.29237396s)
helpers_test.go:175: Cleaning up "existing-network-516705" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-516705
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-516705: (1.978189944s)
--- PASS: TestKicExistingNetwork (34.43s)

                                                
                                    
TestKicCustomSubnet (35.84s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-873470 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-873470 --subnet=192.168.60.0/24: (33.67049584s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-873470 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-873470" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-873470
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-873470: (2.142968697s)
--- PASS: TestKicCustomSubnet (35.84s)

                                                
                                    
TestKicStaticIP (38.04s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-071627 --static-ip=192.168.200.200
E1002 11:59:24.740651 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/functional-262988/client.crt: no such file or directory
E1002 11:59:52.425935 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/functional-262988/client.crt: no such file or directory
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-071627 --static-ip=192.168.200.200: (35.594628457s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-071627 ip
helpers_test.go:175: Cleaning up "static-ip-071627" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-071627
E1002 11:59:58.590784 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/ingress-addon-legacy-999051/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-071627: (2.290521585s)
--- PASS: TestKicStaticIP (38.04s)

                                                
                                    
TestMainNoArgs (0.09s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.09s)

                                                
                                    
TestMinikubeProfile (70.78s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-965789 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-965789 --driver=docker  --container-runtime=crio: (30.054702921s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-968907 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-968907 --driver=docker  --container-runtime=crio: (35.44426401s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-965789
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-968907
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-968907" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-968907
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-968907: (2.002467947s)
helpers_test.go:175: Cleaning up "first-965789" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-965789
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-965789: (2.001312795s)
--- PASS: TestMinikubeProfile (70.78s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (8.72s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-162050 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-162050 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (7.720927572s)
--- PASS: TestMountStart/serial/StartWithMountFirst (8.72s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.28s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-162050 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.28s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (7.44s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-164003 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-164003 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (6.434666162s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.44s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.29s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-164003 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.29s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.7s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-162050 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-162050 --alsologtostderr -v=5: (1.69597565s)
--- PASS: TestMountStart/serial/DeleteFirst (1.70s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.37s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-164003 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.37s)

                                                
                                    
TestMountStart/serial/Stop (1.23s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-164003
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-164003: (1.224936041s)
--- PASS: TestMountStart/serial/Stop (1.23s)

                                                
                                    
TestMountStart/serial/RestartStopped (8.32s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-164003
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-164003: (7.320218562s)
--- PASS: TestMountStart/serial/RestartStopped (8.32s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.31s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-164003 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.31s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (95.4s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-361100 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E1002 12:01:56.256716 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/addons-346248/client.crt: no such file or directory
E1002 12:02:14.744251 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/ingress-addon-legacy-999051/client.crt: no such file or directory
E1002 12:02:42.431751 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/ingress-addon-legacy-999051/client.crt: no such file or directory
multinode_test.go:85: (dbg) Done: out/minikube-linux-arm64 start -p multinode-361100 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m34.828435881s)
multinode_test.go:91: (dbg) Run:  out/minikube-linux-arm64 -p multinode-361100 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (95.40s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (5.62s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-361100 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:486: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-361100 -- rollout status deployment/busybox
E1002 12:03:19.302766 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/addons-346248/client.crt: no such file or directory
multinode_test.go:486: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-361100 -- rollout status deployment/busybox: (3.431010931s)
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-361100 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:516: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-361100 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:524: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-361100 -- exec busybox-5bc68d56bd-4tnjh -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-361100 -- exec busybox-5bc68d56bd-wmx6q -- nslookup kubernetes.io
multinode_test.go:534: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-361100 -- exec busybox-5bc68d56bd-4tnjh -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-361100 -- exec busybox-5bc68d56bd-wmx6q -- nslookup kubernetes.default
multinode_test.go:542: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-361100 -- exec busybox-5bc68d56bd-4tnjh -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-361100 -- exec busybox-5bc68d56bd-wmx6q -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.62s)

                                                
                                    
TestMultiNode/serial/AddNode (50.43s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-361100 -v 3 --alsologtostderr
multinode_test.go:110: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-361100 -v 3 --alsologtostderr: (49.70395561s)
multinode_test.go:116: (dbg) Run:  out/minikube-linux-arm64 -p multinode-361100 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (50.43s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.4s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.40s)

                                                
                                    
TestMultiNode/serial/CopyFile (11.15s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-linux-arm64 -p multinode-361100 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-361100 cp testdata/cp-test.txt multinode-361100:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-361100 ssh -n multinode-361100 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-361100 cp multinode-361100:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3732587496/001/cp-test_multinode-361100.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-361100 ssh -n multinode-361100 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-361100 cp multinode-361100:/home/docker/cp-test.txt multinode-361100-m02:/home/docker/cp-test_multinode-361100_multinode-361100-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-361100 ssh -n multinode-361100 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-361100 ssh -n multinode-361100-m02 "sudo cat /home/docker/cp-test_multinode-361100_multinode-361100-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-361100 cp multinode-361100:/home/docker/cp-test.txt multinode-361100-m03:/home/docker/cp-test_multinode-361100_multinode-361100-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-361100 ssh -n multinode-361100 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-361100 ssh -n multinode-361100-m03 "sudo cat /home/docker/cp-test_multinode-361100_multinode-361100-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-361100 cp testdata/cp-test.txt multinode-361100-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-361100 ssh -n multinode-361100-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-361100 cp multinode-361100-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3732587496/001/cp-test_multinode-361100-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-361100 ssh -n multinode-361100-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-361100 cp multinode-361100-m02:/home/docker/cp-test.txt multinode-361100:/home/docker/cp-test_multinode-361100-m02_multinode-361100.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-361100 ssh -n multinode-361100-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-361100 ssh -n multinode-361100 "sudo cat /home/docker/cp-test_multinode-361100-m02_multinode-361100.txt"
E1002 12:04:24.740573 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/functional-262988/client.crt: no such file or directory
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-361100 cp multinode-361100-m02:/home/docker/cp-test.txt multinode-361100-m03:/home/docker/cp-test_multinode-361100-m02_multinode-361100-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-361100 ssh -n multinode-361100-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-361100 ssh -n multinode-361100-m03 "sudo cat /home/docker/cp-test_multinode-361100-m02_multinode-361100-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-361100 cp testdata/cp-test.txt multinode-361100-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-361100 ssh -n multinode-361100-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-361100 cp multinode-361100-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3732587496/001/cp-test_multinode-361100-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-361100 ssh -n multinode-361100-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-361100 cp multinode-361100-m03:/home/docker/cp-test.txt multinode-361100:/home/docker/cp-test_multinode-361100-m03_multinode-361100.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-361100 ssh -n multinode-361100-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-361100 ssh -n multinode-361100 "sudo cat /home/docker/cp-test_multinode-361100-m03_multinode-361100.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-361100 cp multinode-361100-m03:/home/docker/cp-test.txt multinode-361100-m02:/home/docker/cp-test_multinode-361100-m03_multinode-361100-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-361100 ssh -n multinode-361100-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-361100 ssh -n multinode-361100-m02 "sudo cat /home/docker/cp-test_multinode-361100-m03_multinode-361100-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (11.15s)

                                                
                                    
TestMultiNode/serial/StopNode (2.4s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-linux-arm64 -p multinode-361100 node stop m03
multinode_test.go:210: (dbg) Done: out/minikube-linux-arm64 -p multinode-361100 node stop m03: (1.239878369s)
multinode_test.go:216: (dbg) Run:  out/minikube-linux-arm64 -p multinode-361100 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-361100 status: exit status 7 (590.795435ms)

                                                
                                                
-- stdout --
	multinode-361100
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-361100-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-361100-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:223: (dbg) Run:  out/minikube-linux-arm64 -p multinode-361100 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-361100 status --alsologtostderr: exit status 7 (570.603872ms)

                                                
                                                
-- stdout --
	multinode-361100
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-361100-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-361100-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 12:04:31.369217 2573224 out.go:296] Setting OutFile to fd 1 ...
	I1002 12:04:31.369490 2573224 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 12:04:31.369518 2573224 out.go:309] Setting ErrFile to fd 2...
	I1002 12:04:31.369537 2573224 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 12:04:31.369838 2573224 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17340-2494243/.minikube/bin
	I1002 12:04:31.370072 2573224 out.go:303] Setting JSON to false
	I1002 12:04:31.370254 2573224 mustload.go:65] Loading cluster: multinode-361100
	I1002 12:04:31.370280 2573224 notify.go:220] Checking for updates...
	I1002 12:04:31.370755 2573224 config.go:182] Loaded profile config "multinode-361100": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1002 12:04:31.370791 2573224 status.go:255] checking status of multinode-361100 ...
	I1002 12:04:31.371329 2573224 cli_runner.go:164] Run: docker container inspect multinode-361100 --format={{.State.Status}}
	I1002 12:04:31.391723 2573224 status.go:330] multinode-361100 host status = "Running" (err=<nil>)
	I1002 12:04:31.391763 2573224 host.go:66] Checking if "multinode-361100" exists ...
	I1002 12:04:31.392258 2573224 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-361100
	I1002 12:04:31.416440 2573224 host.go:66] Checking if "multinode-361100" exists ...
	I1002 12:04:31.416803 2573224 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 12:04:31.416861 2573224 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-361100
	I1002 12:04:31.449200 2573224 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35947 SSHKeyPath:/home/jenkins/minikube-integration/17340-2494243/.minikube/machines/multinode-361100/id_rsa Username:docker}
	I1002 12:04:31.547465 2573224 ssh_runner.go:195] Run: systemctl --version
	I1002 12:04:31.553430 2573224 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 12:04:31.567655 2573224 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 12:04:31.641381 2573224 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:47 OomKillDisable:true NGoroutines:65 SystemTime:2023-10-02 12:04:31.631033473 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1045-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1002 12:04:31.642103 2573224 kubeconfig.go:92] found "multinode-361100" server: "https://192.168.58.2:8443"
	I1002 12:04:31.642128 2573224 api_server.go:166] Checking apiserver status ...
	I1002 12:04:31.642174 2573224 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 12:04:31.655574 2573224 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1229/cgroup
	I1002 12:04:31.668214 2573224 api_server.go:182] apiserver freezer: "8:freezer:/docker/506dd6922a980e458f6da9ba5667ad60afdf56bc377cf5d8b7da92e45a291166/crio/crio-7eea9ad917e7cfa7bbb9016a2f9c82eb4fae90b6ea9a9b2e8f7dff9679df8d7b"
	I1002 12:04:31.668291 2573224 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/506dd6922a980e458f6da9ba5667ad60afdf56bc377cf5d8b7da92e45a291166/crio/crio-7eea9ad917e7cfa7bbb9016a2f9c82eb4fae90b6ea9a9b2e8f7dff9679df8d7b/freezer.state
	I1002 12:04:31.679262 2573224 api_server.go:204] freezer state: "THAWED"
	I1002 12:04:31.679292 2573224 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1002 12:04:31.689096 2573224 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I1002 12:04:31.689130 2573224 status.go:421] multinode-361100 apiserver status = Running (err=<nil>)
	I1002 12:04:31.689146 2573224 status.go:257] multinode-361100 status: &{Name:multinode-361100 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1002 12:04:31.689165 2573224 status.go:255] checking status of multinode-361100-m02 ...
	I1002 12:04:31.689477 2573224 cli_runner.go:164] Run: docker container inspect multinode-361100-m02 --format={{.State.Status}}
	I1002 12:04:31.707636 2573224 status.go:330] multinode-361100-m02 host status = "Running" (err=<nil>)
	I1002 12:04:31.707662 2573224 host.go:66] Checking if "multinode-361100-m02" exists ...
	I1002 12:04:31.708030 2573224 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-361100-m02
	I1002 12:04:31.726801 2573224 host.go:66] Checking if "multinode-361100-m02" exists ...
	I1002 12:04:31.727167 2573224 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 12:04:31.727222 2573224 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-361100-m02
	I1002 12:04:31.747498 2573224 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35952 SSHKeyPath:/home/jenkins/minikube-integration/17340-2494243/.minikube/machines/multinode-361100-m02/id_rsa Username:docker}
	I1002 12:04:31.843233 2573224 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 12:04:31.857814 2573224 status.go:257] multinode-361100-m02 status: &{Name:multinode-361100-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1002 12:04:31.857851 2573224 status.go:255] checking status of multinode-361100-m03 ...
	I1002 12:04:31.858173 2573224 cli_runner.go:164] Run: docker container inspect multinode-361100-m03 --format={{.State.Status}}
	I1002 12:04:31.879248 2573224 status.go:330] multinode-361100-m03 host status = "Stopped" (err=<nil>)
	I1002 12:04:31.879272 2573224 status.go:343] host is not running, skipping remaining checks
	I1002 12:04:31.879280 2573224 status.go:257] multinode-361100-m03 status: &{Name:multinode-361100-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.40s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (13.05s)
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:244: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-361100 node start m03 --alsologtostderr
multinode_test.go:254: (dbg) Done: out/minikube-linux-arm64 -p multinode-361100 node start m03 --alsologtostderr: (12.146542175s)
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-361100 status
multinode_test.go:275: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (13.05s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (126.29s)
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-361100
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-361100
multinode_test.go:290: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-361100: (25.14287994s)
multinode_test.go:295: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-361100 --wait=true -v=8 --alsologtostderr
multinode_test.go:295: (dbg) Done: out/minikube-linux-arm64 start -p multinode-361100 --wait=true -v=8 --alsologtostderr: (1m41.009913841s)
multinode_test.go:300: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-361100
--- PASS: TestMultiNode/serial/RestartKeepsNodes (126.29s)

TestMultiNode/serial/DeleteNode (5.1s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-linux-arm64 -p multinode-361100 node delete m03
multinode_test.go:394: (dbg) Done: out/minikube-linux-arm64 -p multinode-361100 node delete m03: (4.352214289s)
multinode_test.go:400: (dbg) Run:  out/minikube-linux-arm64 -p multinode-361100 status --alsologtostderr
multinode_test.go:414: (dbg) Run:  docker volume ls
multinode_test.go:424: (dbg) Run:  kubectl get nodes
multinode_test.go:432: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
E1002 12:06:56.257469 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/addons-346248/client.crt: no such file or directory
--- PASS: TestMultiNode/serial/DeleteNode (5.10s)

TestMultiNode/serial/StopMultiNode (24.09s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 -p multinode-361100 stop
E1002 12:07:14.744455 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/ingress-addon-legacy-999051/client.crt: no such file or directory
multinode_test.go:314: (dbg) Done: out/minikube-linux-arm64 -p multinode-361100 stop: (23.901628201s)
multinode_test.go:320: (dbg) Run:  out/minikube-linux-arm64 -p multinode-361100 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-361100 status: exit status 7 (88.118499ms)

-- stdout --
	multinode-361100
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-361100-m02
	type: Worker
	host: Stopped
	kubelet: Stopped

-- /stdout --
multinode_test.go:327: (dbg) Run:  out/minikube-linux-arm64 -p multinode-361100 status --alsologtostderr
multinode_test.go:327: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-361100 status --alsologtostderr: exit status 7 (95.304869ms)

-- stdout --
	multinode-361100
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-361100-m02
	type: Worker
	host: Stopped
	kubelet: Stopped

-- /stdout --
** stderr ** 
	I1002 12:07:20.368723 2581267 out.go:296] Setting OutFile to fd 1 ...
	I1002 12:07:20.369055 2581267 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 12:07:20.369087 2581267 out.go:309] Setting ErrFile to fd 2...
	I1002 12:07:20.369108 2581267 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 12:07:20.369420 2581267 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17340-2494243/.minikube/bin
	I1002 12:07:20.369640 2581267 out.go:303] Setting JSON to false
	I1002 12:07:20.369880 2581267 notify.go:220] Checking for updates...
	I1002 12:07:20.370697 2581267 mustload.go:65] Loading cluster: multinode-361100
	I1002 12:07:20.371398 2581267 config.go:182] Loaded profile config "multinode-361100": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1002 12:07:20.371455 2581267 status.go:255] checking status of multinode-361100 ...
	I1002 12:07:20.372798 2581267 cli_runner.go:164] Run: docker container inspect multinode-361100 --format={{.State.Status}}
	I1002 12:07:20.391947 2581267 status.go:330] multinode-361100 host status = "Stopped" (err=<nil>)
	I1002 12:07:20.391967 2581267 status.go:343] host is not running, skipping remaining checks
	I1002 12:07:20.391974 2581267 status.go:257] multinode-361100 status: &{Name:multinode-361100 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1002 12:07:20.391998 2581267 status.go:255] checking status of multinode-361100-m02 ...
	I1002 12:07:20.392333 2581267 cli_runner.go:164] Run: docker container inspect multinode-361100-m02 --format={{.State.Status}}
	I1002 12:07:20.410426 2581267 status.go:330] multinode-361100-m02 host status = "Stopped" (err=<nil>)
	I1002 12:07:20.410445 2581267 status.go:343] host is not running, skipping remaining checks
	I1002 12:07:20.410452 2581267 status.go:257] multinode-361100-m02 status: &{Name:multinode-361100-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.09s)

TestMultiNode/serial/RestartMultiNode (81.21s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:344: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:354: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-361100 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:354: (dbg) Done: out/minikube-linux-arm64 start -p multinode-361100 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m20.449742829s)
multinode_test.go:360: (dbg) Run:  out/minikube-linux-arm64 -p multinode-361100 status --alsologtostderr
multinode_test.go:374: (dbg) Run:  kubectl get nodes
multinode_test.go:382: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (81.21s)

TestMultiNode/serial/ValidateNameConflict (35.86s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-361100
multinode_test.go:452: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-361100-m02 --driver=docker  --container-runtime=crio
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-361100-m02 --driver=docker  --container-runtime=crio: exit status 14 (91.565404ms)

-- stdout --
	* [multinode-361100-m02] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17340
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17340-2494243/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17340-2494243/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=

-- /stdout --
** stderr ** 
	! Profile name 'multinode-361100-m02' is duplicated with machine name 'multinode-361100-m02' in profile 'multinode-361100'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-361100-m03 --driver=docker  --container-runtime=crio
multinode_test.go:460: (dbg) Done: out/minikube-linux-arm64 start -p multinode-361100-m03 --driver=docker  --container-runtime=crio: (33.392686541s)
multinode_test.go:467: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-361100
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-361100: exit status 80 (354.555729ms)

-- stdout --
	* Adding node m03 to cluster multinode-361100

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-361100-m03 already exists in multinode-361100-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-361100-m03
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-361100-m03: (1.972544694s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (35.86s)

TestPreload (169.75s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-502559 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
E1002 12:09:24.740940 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/functional-262988/client.crt: no such file or directory
E1002 12:10:47.786487 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/functional-262988/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-502559 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m27.290566865s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-502559 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-502559 image pull gcr.io/k8s-minikube/busybox: (2.231694425s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-502559
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-502559: (5.846857232s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-502559 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E1002 12:11:56.256647 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/addons-346248/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-502559 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (1m11.740017013s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-502559 image list
helpers_test.go:175: Cleaning up "test-preload-502559" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-502559
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-502559: (2.406213636s)
--- PASS: TestPreload (169.75s)

TestScheduledStopUnix (110.16s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-018005 --memory=2048 --driver=docker  --container-runtime=crio
E1002 12:12:14.744238 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/ingress-addon-legacy-999051/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-018005 --memory=2048 --driver=docker  --container-runtime=crio: (34.007006211s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-018005 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-018005 -n scheduled-stop-018005
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-018005 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-018005 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-018005 -n scheduled-stop-018005
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-018005
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-018005 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E1002 12:13:37.792049 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/ingress-addon-legacy-999051/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-018005
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-018005: exit status 7 (73.366779ms)

-- stdout --
	scheduled-stop-018005
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-018005 -n scheduled-stop-018005
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-018005 -n scheduled-stop-018005: exit status 7 (70.506749ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-018005" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-018005
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-018005: (4.454279447s)
--- PASS: TestScheduledStopUnix (110.16s)

TestInsufficientStorage (10.7s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-718178 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-718178 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (8.083682s)

-- stdout --
	{"specversion":"1.0","id":"6560fc26-c53a-4641-bd90-4b1832eb9eb6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-718178] minikube v1.31.2 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"e66e2011-ff9f-4a35-8dcb-78175adf5aa3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17340"}}
	{"specversion":"1.0","id":"f5a594bf-a428-4327-8140-684a0229d29a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"252847ed-16d9-4fee-885d-d1865719d081","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17340-2494243/kubeconfig"}}
	{"specversion":"1.0","id":"dd30a717-a6de-4c6e-bce6-e0f7c0c4f8c9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17340-2494243/.minikube"}}
	{"specversion":"1.0","id":"1178e603-e4c9-4185-bf4c-e72d04019b22","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"62fb79ac-08c6-4132-bf39-992784d9c4b8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"e31f00f8-bf79-4800-9958-fde6bf44c76c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"dc261ae5-5295-4211-9004-c3dc386a3f37","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"de88ddd9-7d1b-44ab-8a48-ed497da1761e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"7d5aadf6-ebaf-4f7c-b1d2-46538dd1bcca","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"ba8cad9e-3be8-4184-9b28-783fc508b9ca","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-718178 in cluster insufficient-storage-718178","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"f6e58d3b-db8c-4c3b-b8c0-0cb299c8a410","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"585c9f1d-878a-4c72-994f-4af9ad836515","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"4bd5acb9-5e67-48a4-825b-e81ae7396111","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-718178 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-718178 --output=json --layout=cluster: exit status 7 (328.786399ms)

-- stdout --
	{"Name":"insufficient-storage-718178","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.31.2","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-718178","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E1002 12:14:12.388850 2598151 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-718178" does not appear in /home/jenkins/minikube-integration/17340-2494243/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-718178 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-718178 --output=json --layout=cluster: exit status 7 (323.497372ms)

-- stdout --
	{"Name":"insufficient-storage-718178","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.31.2","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-718178","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E1002 12:14:12.713461 2598204 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-718178" does not appear in /home/jenkins/minikube-integration/17340-2494243/kubeconfig
	E1002 12:14:12.727589 2598204 status.go:559] unable to read event log: stat: stat /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/insufficient-storage-718178/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-718178" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-718178
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-718178: (1.965738974s)
--- PASS: TestInsufficientStorage (10.70s)

TestKubernetesUpgrade (383.86s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:235: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-832241 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1002 12:19:24.740293 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/functional-262988/client.crt: no such file or directory
E1002 12:19:59.303471 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/addons-346248/client.crt: no such file or directory
version_upgrade_test.go:235: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-832241 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m0.614319398s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-832241
version_upgrade_test.go:240: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-832241: (1.312468417s)
version_upgrade_test.go:245: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-832241 status --format={{.Host}}
version_upgrade_test.go:245: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-832241 status --format={{.Host}}: exit status 7 (75.839526ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:247: status error: exit status 7 (may be ok)
version_upgrade_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-832241 --memory=2200 --kubernetes-version=v1.28.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-832241 --memory=2200 --kubernetes-version=v1.28.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m48.129846368s)
version_upgrade_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-832241 version --output=json
version_upgrade_test.go:280: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:282: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-832241 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:282: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-832241 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=crio: exit status 106 (122.820882ms)

-- stdout --
	* [kubernetes-upgrade-832241] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17340
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17340-2494243/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17340-2494243/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.28.2 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-832241
	    minikube start -p kubernetes-upgrade-832241 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-8322412 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.28.2, by running:
	    
	    minikube start -p kubernetes-upgrade-832241 --kubernetes-version=v1.28.2

** /stderr **
version_upgrade_test.go:286: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:288: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-832241 --memory=2200 --kubernetes-version=v1.28.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:288: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-832241 --memory=2200 --kubernetes-version=v1.28.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (31.12877787s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-832241" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-832241
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-832241: (2.312518197s)
--- PASS: TestKubernetesUpgrade (383.86s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-493221 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-493221 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (83.08259ms)

-- stdout --
	* [NoKubernetes-493221] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17340
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17340-2494243/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17340-2494243/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
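The MK_USAGE rejection above comes down to a mutual-exclusion rule between two flags. A minimal sh sketch of that rule (the `validate_flags` helper is hypothetical; minikube's real check lives in its own flag handling, not here):

```shell
# Hypothetical sketch of the rule behind the MK_USAGE error above:
# --no-kubernetes and --kubernetes-version are mutually exclusive.
validate_flags() {
  no_k8s="$1"      # "true" when --no-kubernetes was passed
  k8s_version="$2" # value of --kubernetes-version, empty if unset
  if [ "$no_k8s" = "true" ] && [ -n "$k8s_version" ]; then
    echo "cannot specify --kubernetes-version with --no-kubernetes"
    return 1
  fi
  echo "flags ok"
}

validate_flags true 1.20 || true  # the rejected combination from the log
validate_flags false ""           # an accepted invocation
```

As the stderr suggests, clearing a globally configured version with `minikube config unset kubernetes-version` is the documented way out of this conflict.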
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

TestNoKubernetes/serial/StartWithK8s (48.07s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-493221 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-493221 --driver=docker  --container-runtime=crio: (46.922933464s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-493221 status -o json
no_kubernetes_test.go:200: (dbg) Done: out/minikube-linux-arm64 -p NoKubernetes-493221 status -o json: (1.113197033s)
--- PASS: TestNoKubernetes/serial/StartWithK8s (48.07s)

TestNoKubernetes/serial/StartWithStopK8s (15.62s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-493221 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-493221 --no-kubernetes --driver=docker  --container-runtime=crio: (13.12600664s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-493221 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-493221 status -o json: exit status 2 (426.821274ms)

-- stdout --
	{"Name":"NoKubernetes-493221","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
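The JSON above is what the subtest asserts against. A small sh sketch of checking the Kubelet field without a JSON parser; the payload is inlined from the log, whereas in practice it would come from `minikube -p NoKubernetes-493221 status -o json`:

```shell
# Kubelet-state check against the status JSON shown above; the string is
# copied from the log rather than fetched from a live cluster.
status='{"Name":"NoKubernetes-493221","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}'

kubelet_state() {
  case "$1" in
    *'"Kubelet":"Stopped"'*) echo "kubelet stopped" ;;
    *'"Kubelet":"Running"'*) echo "kubelet running" ;;
    *)                       echo "unknown" ;;
  esac
}

kubelet_state "$status"  # prints "kubelet stopped"
```

The exit status 2 from `minikube status` itself is expected here: the host is running but Kubernetes components are stopped, and the command encodes that state in its exit code.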
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-493221
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-493221: (2.066295718s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (15.62s)

TestNoKubernetes/serial/Start (8.12s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-493221 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-493221 --no-kubernetes --driver=docker  --container-runtime=crio: (8.120099444s)
--- PASS: TestNoKubernetes/serial/Start (8.12s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.3s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-493221 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-493221 "sudo systemctl is-active --quiet service kubelet": exit status 1 (301.998009ms)

** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.30s)

TestNoKubernetes/serial/ProfileList (0.92s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.92s)

TestNoKubernetes/serial/Stop (1.24s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-493221
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-493221: (1.238307453s)
--- PASS: TestNoKubernetes/serial/Stop (1.24s)

TestNoKubernetes/serial/StartNoArgs (7.49s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-493221 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-493221 --driver=docker  --container-runtime=crio: (7.486328415s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.49s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.39s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-493221 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-493221 "sudo systemctl is-active --quiet service kubelet": exit status 1 (387.734219ms)

** stderr ** 
	ssh: Process exited with status 3
** /stderr **
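Both Verify* subtests lean on systemctl exit codes: `systemctl is-active --quiet` exits 0 when the unit is active and non-zero (3 for inactive, per systemctl(1)) otherwise, and ssh propagates that status back. A sketch of the interpretation with the remote call stubbed out:

```shell
# kubelet_check mimics the test's reading of the exit status;
# `sh -c "exit $1"` stands in for the real
# `ssh ... "sudo systemctl is-active --quiet service kubelet"` round trip.
kubelet_check() {
  if sh -c "exit $1"; then
    echo "kubelet active"
  else
    echo "kubelet not running"
  fi
}

kubelet_check 3  # status 3 = inactive unit, as in the log above
```

So the "Non-zero exit" here is the passing path: a stopped kubelet is exactly what a `--no-kubernetes` profile should report.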
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.39s)

TestNetworkPlugins/group/false (4.24s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-409989 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-409989 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (304.0764ms)

-- stdout --
	* [false-409989] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17340
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17340-2494243/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17340-2494243/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
-- /stdout --
** stderr ** 
	I1002 12:15:43.494489 2607347 out.go:296] Setting OutFile to fd 1 ...
	I1002 12:15:43.494747 2607347 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 12:15:43.494760 2607347 out.go:309] Setting ErrFile to fd 2...
	I1002 12:15:43.494766 2607347 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 12:15:43.495104 2607347 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17340-2494243/.minikube/bin
	I1002 12:15:43.495602 2607347 out.go:303] Setting JSON to false
	I1002 12:15:43.496886 2607347 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":71889,"bootTime":1696177054,"procs":380,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1045-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1002 12:15:43.496968 2607347 start.go:138] virtualization:  
	I1002 12:15:43.499266 2607347 out.go:177] * [false-409989] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I1002 12:15:43.502062 2607347 notify.go:220] Checking for updates...
	I1002 12:15:43.504068 2607347 out.go:177]   - MINIKUBE_LOCATION=17340
	I1002 12:15:43.507778 2607347 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 12:15:43.509577 2607347 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17340-2494243/kubeconfig
	I1002 12:15:43.511389 2607347 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17340-2494243/.minikube
	I1002 12:15:43.517385 2607347 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 12:15:43.519354 2607347 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 12:15:43.524330 2607347 config.go:182] Loaded profile config "missing-upgrade-402693": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I1002 12:15:43.524615 2607347 driver.go:373] Setting default libvirt URI to qemu:///system
	I1002 12:15:43.573568 2607347 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I1002 12:15:43.573774 2607347 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 12:15:43.719282 2607347 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:55 SystemTime:2023-10-02 12:15:43.706575824 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1045-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1002 12:15:43.719393 2607347 docker.go:294] overlay module found
	I1002 12:15:43.722342 2607347 out.go:177] * Using the docker driver based on user configuration
	I1002 12:15:43.724118 2607347 start.go:298] selected driver: docker
	I1002 12:15:43.724144 2607347 start.go:902] validating driver "docker" against <nil>
	I1002 12:15:43.724165 2607347 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 12:15:43.726419 2607347 out.go:177] 
	W1002 12:15:43.728065 2607347 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1002 12:15:43.729842 2607347 out.go:177] 
** /stderr **
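The exit here is by design: CRI-O ships no built-in pod networking, so minikube refuses `--cni=false` with that runtime and this test only verifies the rejection. A hypothetical sh sketch of the combination rule (the `validate_cni` helper is illustrative; the real validation lives in minikube's driver/runtime setup):

```shell
# validate_cni is illustrative only; it mirrors the MK_USAGE rule in the
# log: the crio runtime must be paired with some CNI, so "false" is rejected.
validate_cni() {
  runtime="$1"
  cni="$2"
  if [ "$runtime" = "crio" ] && [ "$cni" = "false" ]; then
    echo 'The "crio" container runtime requires CNI'
    return 1
  fi
  echo "runtime/cni ok"
}

validate_cni crio false || true  # the rejected combination from this test
validate_cni crio bridge         # an accepted pairing
```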
net_test.go:88: 
----------------------- debugLogs start: false-409989 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-409989

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-409989

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-409989

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-409989

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-409989

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-409989

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-409989

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-409989

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-409989

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-409989

>>> host: /etc/nsswitch.conf:
* Profile "false-409989" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-409989"

>>> host: /etc/hosts:
* Profile "false-409989" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-409989"

>>> host: /etc/resolv.conf:
* Profile "false-409989" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-409989"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-409989

>>> host: crictl pods:
* Profile "false-409989" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-409989"

>>> host: crictl containers:
* Profile "false-409989" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-409989"

>>> k8s: describe netcat deployment:
error: context "false-409989" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-409989" does not exist

>>> k8s: netcat logs:
error: context "false-409989" does not exist

>>> k8s: describe coredns deployment:
error: context "false-409989" does not exist

>>> k8s: describe coredns pods:
error: context "false-409989" does not exist

>>> k8s: coredns logs:
error: context "false-409989" does not exist

>>> k8s: describe api server pod(s):
error: context "false-409989" does not exist

>>> k8s: api server logs:
error: context "false-409989" does not exist

>>> host: /etc/cni:
* Profile "false-409989" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-409989"

>>> host: ip a s:
* Profile "false-409989" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-409989"

>>> host: ip r s:
* Profile "false-409989" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-409989"

>>> host: iptables-save:
* Profile "false-409989" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-409989"

>>> host: iptables table nat:
* Profile "false-409989" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-409989"

>>> k8s: describe kube-proxy daemon set:
error: context "false-409989" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-409989" does not exist

>>> k8s: kube-proxy logs:
error: context "false-409989" does not exist

>>> host: kubelet daemon status:
* Profile "false-409989" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-409989"

>>> host: kubelet daemon config:
* Profile "false-409989" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-409989"

>>> k8s: kubelet logs:
* Profile "false-409989" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-409989"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-409989" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-409989"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-409989" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-409989"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-409989

>>> host: docker daemon status:
* Profile "false-409989" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-409989"

>>> host: docker daemon config:
* Profile "false-409989" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-409989"

>>> host: /etc/docker/daemon.json:
* Profile "false-409989" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-409989"

>>> host: docker system info:
* Profile "false-409989" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-409989"

>>> host: cri-docker daemon status:
* Profile "false-409989" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-409989"

>>> host: cri-docker daemon config:
* Profile "false-409989" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-409989"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-409989" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-409989"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-409989" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-409989"

>>> host: cri-dockerd version:
* Profile "false-409989" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-409989"

>>> host: containerd daemon status:
* Profile "false-409989" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-409989"

>>> host: containerd daemon config:
* Profile "false-409989" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-409989"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-409989" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-409989"

>>> host: /etc/containerd/config.toml:
* Profile "false-409989" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-409989"

>>> host: containerd config dump:
* Profile "false-409989" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-409989"

>>> host: crio daemon status:
* Profile "false-409989" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-409989"

>>> host: crio daemon config:
* Profile "false-409989" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-409989"

>>> host: /etc/crio:
* Profile "false-409989" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-409989"

>>> host: crio config:
* Profile "false-409989" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-409989"

----------------------- debugLogs end: false-409989 [took: 3.747374634s] --------------------------------
helpers_test.go:175: Cleaning up "false-409989" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-409989
--- PASS: TestNetworkPlugins/group/false (4.24s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (1.02s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.02s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (0.67s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:219: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-998345
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.67s)

                                                
                                    
TestPause/serial/Start (78.72s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-668509 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
E1002 12:22:14.744898 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/ingress-addon-legacy-999051/client.crt: no such file or directory
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-668509 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m18.724163735s)
--- PASS: TestPause/serial/Start (78.72s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (88s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-409989 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
E1002 12:24:24.744901 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/functional-262988/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-409989 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m27.996154354s)
--- PASS: TestNetworkPlugins/group/auto/Start (88.00s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (81.46s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-409989 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-409989 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m21.46197069s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (81.46s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.42s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-409989 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.42s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (13.53s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-409989 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-2hx2d" [45649181-56d0-4f21-a2fb-a2e6c0a638e0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-2hx2d" [45649181-56d0-4f21-a2fb-a2e6c0a638e0] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 13.017424198s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (13.53s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.26s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-409989 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.26s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-409989 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.20s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.23s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-409989 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.23s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (78.82s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-409989 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-409989 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m18.823604931s)
--- PASS: TestNetworkPlugins/group/calico/Start (78.82s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (5.08s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-8lgzb" [bf1ed1ac-32fe-4d0d-b1e0-54c136346f17] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.076590236s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.08s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.5s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-409989 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.50s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (11.55s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-409989 replace --force -f testdata/netcat-deployment.yaml
E1002 12:26:56.257013 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/addons-346248/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-xzz8r" [70dbba52-4537-4ff7-8899-f30a810ec309] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-xzz8r" [70dbba52-4537-4ff7-8899-f30a810ec309] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.018916404s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.55s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-409989 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.22s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-409989 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.21s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.22s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-409989 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.22s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (73.11s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-409989 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-409989 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m13.108700769s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (73.11s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (5.04s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-zg688" [a286e7fd-aca2-4bfe-827d-77ced705413b] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.041169437s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.04s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.42s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-409989 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.42s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (13.56s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-409989 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-ct48j" [e081ef04-930e-402f-ba34-099d9f7d8e6e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-ct48j" [e081ef04-930e-402f-ba34-099d9f7d8e6e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 13.015447533s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (13.56s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.31s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-409989 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.31s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.26s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-409989 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.26s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.26s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-409989 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.26s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (88.93s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-409989 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-409989 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m28.928325494s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (88.93s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.35s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-409989 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.35s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (12.45s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-409989 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-bsk7x" [e0ad5426-2373-4b02-8728-89a64d8002cd] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-bsk7x" [e0ad5426-2373-4b02-8728-89a64d8002cd] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.013021798s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.45s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.33s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-409989 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.33s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.28s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-409989 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.28s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.31s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-409989 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.31s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (61.75s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-409989 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-409989 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (1m1.748326514s)
--- PASS: TestNetworkPlugins/group/flannel/Start (61.75s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.39s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-409989 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.39s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.44s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-409989 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-f4wxl" [59529ffe-5975-48f5-b3eb-8501dcb02a20] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-f4wxl" [59529ffe-5975-48f5-b3eb-8501dcb02a20] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.011996944s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.44s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.25s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-409989 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.25s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-409989 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.20s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.21s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-409989 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.21s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (5.04s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-lhgw4" [6c792756-756e-40ce-942f-0addcef18961] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.042186013s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.04s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.44s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-409989 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.44s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (11.45s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-409989 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-5std2" [cfff93cb-802d-4d87-b6f9-ffc697b34852] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-5std2" [cfff93cb-802d-4d87-b6f9-ffc697b34852] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.026851512s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.45s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (51.98s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-409989 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-409989 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (51.982919367s)
--- PASS: TestNetworkPlugins/group/bridge/Start (51.98s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.51s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-409989 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.51s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.26s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-409989 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.26s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.23s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-409989 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.23s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (144.77s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-302558 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-302558 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0: (2m24.766279995s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (144.77s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.54s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-409989 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.54s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (14.46s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-409989 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-ts8ph" [2a47fa04-78d2-43cd-839e-ec934f079bc0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1002 12:31:33.475738 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/auto-409989/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-ts8ph" [2a47fa04-78d2-43cd-839e-ec934f079bc0] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 14.016114744s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (14.46s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (26.17s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-409989 exec deployment/netcat -- nslookup kubernetes.default
E1002 12:31:50.516602 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/kindnet-409989/client.crt: no such file or directory
E1002 12:31:50.521844 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/kindnet-409989/client.crt: no such file or directory
E1002 12:31:50.532092 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/kindnet-409989/client.crt: no such file or directory
E1002 12:31:50.552265 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/kindnet-409989/client.crt: no such file or directory
E1002 12:31:50.592513 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/kindnet-409989/client.crt: no such file or directory
E1002 12:31:50.672940 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/kindnet-409989/client.crt: no such file or directory
E1002 12:31:50.833279 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/kindnet-409989/client.crt: no such file or directory
E1002 12:31:51.153638 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/kindnet-409989/client.crt: no such file or directory
E1002 12:31:51.794672 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/kindnet-409989/client.crt: no such file or directory
E1002 12:31:53.075789 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/kindnet-409989/client.crt: no such file or directory
E1002 12:31:55.635985 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/kindnet-409989/client.crt: no such file or directory
E1002 12:31:56.257389 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/addons-346248/client.crt: no such file or directory
E1002 12:32:00.756211 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/kindnet-409989/client.crt: no such file or directory
net_test.go:175: (dbg) Non-zero exit: kubectl --context bridge-409989 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.346761911s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:175: (dbg) Run:  kubectl --context bridge-409989 exec deployment/netcat -- nslookup kubernetes.default
E1002 12:32:10.996372 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/kindnet-409989/client.crt: no such file or directory
net_test.go:175: (dbg) Done: kubectl --context bridge-409989 exec deployment/netcat -- nslookup kubernetes.default: (10.227706544s)
--- PASS: TestNetworkPlugins/group/bridge/DNS (26.17s)

TestNetworkPlugins/group/bridge/Localhost (0.22s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-409989 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.22s)

TestNetworkPlugins/group/bridge/HairPin (0.21s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-409989 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.21s)
E1002 12:49:08.624136 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/old-k8s-version-302558/client.crt: no such file or directory
E1002 12:49:11.456620 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/calico-409989/client.crt: no such file or directory
E1002 12:49:22.561695 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/default-k8s-diff-port-494105/client.crt: no such file or directory
E1002 12:49:24.740901 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/functional-262988/client.crt: no such file or directory
E1002 12:50:04.347279 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/enable-default-cni-409989/client.crt: no such file or directory

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (78.46s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-494105 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.2
E1002 12:32:48.409729 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/calico-409989/client.crt: no such file or directory
E1002 12:32:48.415442 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/calico-409989/client.crt: no such file or directory
E1002 12:32:48.425684 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/calico-409989/client.crt: no such file or directory
E1002 12:32:48.445951 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/calico-409989/client.crt: no such file or directory
E1002 12:32:48.486259 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/calico-409989/client.crt: no such file or directory
E1002 12:32:48.566655 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/calico-409989/client.crt: no such file or directory
E1002 12:32:48.727799 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/calico-409989/client.crt: no such file or directory
E1002 12:32:49.048509 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/calico-409989/client.crt: no such file or directory
E1002 12:32:49.689355 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/calico-409989/client.crt: no such file or directory
E1002 12:32:50.970262 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/calico-409989/client.crt: no such file or directory
E1002 12:32:53.531173 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/calico-409989/client.crt: no such file or directory
E1002 12:32:58.651671 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/calico-409989/client.crt: no such file or directory
E1002 12:33:08.892201 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/calico-409989/client.crt: no such file or directory
E1002 12:33:12.436888 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/kindnet-409989/client.crt: no such file or directory
E1002 12:33:29.373336 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/calico-409989/client.crt: no such file or directory
E1002 12:33:36.356820 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/auto-409989/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-494105 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.2: (1m18.45427436s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (78.46s)

TestStartStop/group/old-k8s-version/serial/DeployApp (10.6s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-302558 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [0dc770a0-d4a7-4e1b-9989-5df1b7bf2eda] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [0dc770a0-d4a7-4e1b-9989-5df1b7bf2eda] Running
E1002 12:33:49.462738 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/custom-flannel-409989/client.crt: no such file or directory
E1002 12:33:49.468008 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/custom-flannel-409989/client.crt: no such file or directory
E1002 12:33:49.478332 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/custom-flannel-409989/client.crt: no such file or directory
E1002 12:33:49.498637 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/custom-flannel-409989/client.crt: no such file or directory
E1002 12:33:49.538911 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/custom-flannel-409989/client.crt: no such file or directory
E1002 12:33:49.619253 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/custom-flannel-409989/client.crt: no such file or directory
E1002 12:33:49.779705 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/custom-flannel-409989/client.crt: no such file or directory
E1002 12:33:50.099938 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/custom-flannel-409989/client.crt: no such file or directory
E1002 12:33:50.740174 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/custom-flannel-409989/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.034073144s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-302558 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.60s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.21s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-302558 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1002 12:33:52.020359 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/custom-flannel-409989/client.crt: no such file or directory
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-302558 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.028704345s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-302558 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.21s)

TestStartStop/group/old-k8s-version/serial/Stop (12.28s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-302558 --alsologtostderr -v=3
E1002 12:33:54.581175 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/custom-flannel-409989/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-302558 --alsologtostderr -v=3: (12.277122314s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.28s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.49s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-494105 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [5675c0bc-6828-42f4-a0aa-75c48d12f3bd] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [5675c0bc-6828-42f4-a0aa-75c48d12f3bd] Running
E1002 12:33:59.701988 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/custom-flannel-409989/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.029912885s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-494105 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.49s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.18s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-494105 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-494105 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.064385926s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-494105 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.18s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (15.27s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-494105 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-494105 --alsologtostderr -v=3: (15.270034468s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (15.27s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-302558 -n old-k8s-version-302558
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-302558 -n old-k8s-version-302558: exit status 7 (82.115254ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-302558 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/old-k8s-version/serial/SecondStart (420.66s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-302558 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0
E1002 12:34:09.942443 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/custom-flannel-409989/client.crt: no such file or directory
E1002 12:34:10.334201 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/calico-409989/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-302558 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0: (7m0.12157976s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-302558 -n old-k8s-version-302558
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (420.66s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-494105 -n default-k8s-diff-port-494105
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-494105 -n default-k8s-diff-port-494105: exit status 7 (94.973088ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-494105 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (363.56s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-494105 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.2
E1002 12:34:24.740610 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/functional-262988/client.crt: no such file or directory
E1002 12:34:30.422724 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/custom-flannel-409989/client.crt: no such file or directory
E1002 12:34:34.358007 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/kindnet-409989/client.crt: no such file or directory
E1002 12:35:04.347050 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/enable-default-cni-409989/client.crt: no such file or directory
E1002 12:35:04.352297 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/enable-default-cni-409989/client.crt: no such file or directory
E1002 12:35:04.362624 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/enable-default-cni-409989/client.crt: no such file or directory
E1002 12:35:04.382873 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/enable-default-cni-409989/client.crt: no such file or directory
E1002 12:35:04.423198 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/enable-default-cni-409989/client.crt: no such file or directory
E1002 12:35:04.503489 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/enable-default-cni-409989/client.crt: no such file or directory
E1002 12:35:04.663895 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/enable-default-cni-409989/client.crt: no such file or directory
E1002 12:35:04.984895 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/enable-default-cni-409989/client.crt: no such file or directory
E1002 12:35:05.625034 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/enable-default-cni-409989/client.crt: no such file or directory
E1002 12:35:06.905746 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/enable-default-cni-409989/client.crt: no such file or directory
E1002 12:35:09.465930 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/enable-default-cni-409989/client.crt: no such file or directory
E1002 12:35:11.383742 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/custom-flannel-409989/client.crt: no such file or directory
E1002 12:35:14.586688 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/enable-default-cni-409989/client.crt: no such file or directory
E1002 12:35:24.827385 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/enable-default-cni-409989/client.crt: no such file or directory
E1002 12:35:29.617070 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/flannel-409989/client.crt: no such file or directory
E1002 12:35:29.622340 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/flannel-409989/client.crt: no such file or directory
E1002 12:35:29.632643 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/flannel-409989/client.crt: no such file or directory
E1002 12:35:29.652992 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/flannel-409989/client.crt: no such file or directory
E1002 12:35:29.693278 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/flannel-409989/client.crt: no such file or directory
E1002 12:35:29.773583 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/flannel-409989/client.crt: no such file or directory
E1002 12:35:29.933877 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/flannel-409989/client.crt: no such file or directory
E1002 12:35:30.254389 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/flannel-409989/client.crt: no such file or directory
E1002 12:35:30.895477 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/flannel-409989/client.crt: no such file or directory
E1002 12:35:32.176270 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/flannel-409989/client.crt: no such file or directory
E1002 12:35:32.254739 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/calico-409989/client.crt: no such file or directory
E1002 12:35:34.736994 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/flannel-409989/client.crt: no such file or directory
E1002 12:35:39.857747 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/flannel-409989/client.crt: no such file or directory
E1002 12:35:45.307575 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/enable-default-cni-409989/client.crt: no such file or directory
E1002 12:35:50.097990 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/flannel-409989/client.crt: no such file or directory
E1002 12:35:52.510397 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/auto-409989/client.crt: no such file or directory
E1002 12:36:10.578931 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/flannel-409989/client.crt: no such file or directory
E1002 12:36:20.197625 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/auto-409989/client.crt: no such file or directory
E1002 12:36:26.268066 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/enable-default-cni-409989/client.crt: no such file or directory
E1002 12:36:32.738497 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/bridge-409989/client.crt: no such file or directory
E1002 12:36:32.743823 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/bridge-409989/client.crt: no such file or directory
E1002 12:36:32.754117 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/bridge-409989/client.crt: no such file or directory
E1002 12:36:32.774358 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/bridge-409989/client.crt: no such file or directory
E1002 12:36:32.814694 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/bridge-409989/client.crt: no such file or directory
E1002 12:36:32.895052 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/bridge-409989/client.crt: no such file or directory
E1002 12:36:33.055470 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/bridge-409989/client.crt: no such file or directory
E1002 12:36:33.304802 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/custom-flannel-409989/client.crt: no such file or directory
E1002 12:36:33.376103 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/bridge-409989/client.crt: no such file or directory
E1002 12:36:34.017192 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/bridge-409989/client.crt: no such file or directory
E1002 12:36:35.298250 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/bridge-409989/client.crt: no such file or directory
E1002 12:36:37.859438 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/bridge-409989/client.crt: no such file or directory
E1002 12:36:39.303896 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/addons-346248/client.crt: no such file or directory
E1002 12:36:42.979649 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/bridge-409989/client.crt: no such file or directory
E1002 12:36:50.517186 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/kindnet-409989/client.crt: no such file or directory
E1002 12:36:51.539150 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/flannel-409989/client.crt: no such file or directory
E1002 12:36:53.220652 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/bridge-409989/client.crt: no such file or directory
E1002 12:36:56.256988 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/addons-346248/client.crt: no such file or directory
E1002 12:37:13.700871 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/bridge-409989/client.crt: no such file or directory
E1002 12:37:14.744626 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/ingress-addon-legacy-999051/client.crt: no such file or directory
E1002 12:37:18.199139 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/kindnet-409989/client.crt: no such file or directory
E1002 12:37:48.188714 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/enable-default-cni-409989/client.crt: no such file or directory
E1002 12:37:48.410105 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/calico-409989/client.crt: no such file or directory
E1002 12:37:54.662469 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/bridge-409989/client.crt: no such file or directory
E1002 12:38:13.459386 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/flannel-409989/client.crt: no such file or directory
E1002 12:38:16.095874 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/calico-409989/client.crt: no such file or directory
E1002 12:38:49.462753 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/custom-flannel-409989/client.crt: no such file or directory
E1002 12:39:16.582811 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/bridge-409989/client.crt: no such file or directory
E1002 12:39:17.145524 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/custom-flannel-409989/client.crt: no such file or directory
E1002 12:39:24.740819 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/functional-262988/client.crt: no such file or directory
E1002 12:40:04.347078 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/enable-default-cni-409989/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-494105 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.2: (6m3.04408631s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-494105 -n default-k8s-diff-port-494105
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (363.56s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (13.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-hn55m" [89d68d43-b870-4649-b6b5-e989eb11efe5] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E1002 12:40:29.617415 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/flannel-409989/client.crt: no such file or directory
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-hn55m" [89d68d43-b870-4649-b6b5-e989eb11efe5] Running
E1002 12:40:32.029143 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/enable-default-cni-409989/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 13.028735762s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (13.03s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-hn55m" [89d68d43-b870-4649-b6b5-e989eb11efe5] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.011104909s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-494105 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.12s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.36s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p default-k8s-diff-port-494105 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.36s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.81s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-494105 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-494105 -n default-k8s-diff-port-494105
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-494105 -n default-k8s-diff-port-494105: exit status 2 (358.525285ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-494105 -n default-k8s-diff-port-494105
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-494105 -n default-k8s-diff-port-494105: exit status 2 (358.204012ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-494105 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-494105 -n default-k8s-diff-port-494105
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-494105 -n default-k8s-diff-port-494105
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.81s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (90.97s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-634110 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.2
E1002 12:40:52.510839 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/auto-409989/client.crt: no such file or directory
E1002 12:40:57.299632 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/flannel-409989/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-634110 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.2: (1m30.972891595s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (90.97s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-bphzf" [8c562ca8-0093-456b-bbd1-08e38a9582ce] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.03373523s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.03s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.17s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-bphzf" [8c562ca8-0093-456b-bbd1-08e38a9582ce] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.011560825s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-302558 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.17s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.53s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p old-k8s-version-302558 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20220726-ed811e41
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.53s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (4.97s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-302558 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p old-k8s-version-302558 --alsologtostderr -v=1: (1.469532472s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-302558 -n old-k8s-version-302558
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-302558 -n old-k8s-version-302558: exit status 2 (458.014479ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-302558 -n old-k8s-version-302558
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-302558 -n old-k8s-version-302558: exit status 2 (464.285225ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-302558 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p old-k8s-version-302558 --alsologtostderr -v=1: (1.212658719s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-302558 -n old-k8s-version-302558
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-302558 -n old-k8s-version-302558
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (4.97s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (45.2s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-891493 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.2
E1002 12:41:32.738540 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/bridge-409989/client.crt: no such file or directory
E1002 12:41:50.516726 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/kindnet-409989/client.crt: no such file or directory
E1002 12:41:56.256728 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/addons-346248/client.crt: no such file or directory
E1002 12:42:00.423168 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/bridge-409989/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-891493 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.2: (45.203544643s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (45.20s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.19s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-891493 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-891493 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.189889657s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.19s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (1.28s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-891493 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-891493 --alsologtostderr -v=3: (1.280946602s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.28s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-891493 -n newest-cni-891493
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-891493 -n newest-cni-891493: exit status 7 (75.971941ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-891493 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (34.01s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-891493 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.2
E1002 12:42:14.744435 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/ingress-addon-legacy-999051/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-891493 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.2: (33.610780339s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-891493 -n newest-cni-891493
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (34.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (9.58s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-634110 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [749b28ef-8e00-4341-93f4-f6e5c9260b6f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [749b28ef-8e00-4341-93f4-f6e5c9260b6f] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.044992458s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-634110 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.58s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.25s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-634110 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-634110 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.144508227s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-634110 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.25s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (12.38s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-634110 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-634110 --alsologtostderr -v=3: (12.377608591s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.38s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.31s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-634110 -n embed-certs-634110
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-634110 -n embed-certs-634110: exit status 7 (134.589927ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-634110 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.31s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (356.1s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-634110 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-634110 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.2: (5m55.610610509s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-634110 -n embed-certs-634110
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (356.10s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.34s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p newest-cni-891493 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.34s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (3.84s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-891493 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-891493 -n newest-cni-891493
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-891493 -n newest-cni-891493: exit status 2 (399.543951ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-891493 -n newest-cni-891493
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-891493 -n newest-cni-891493: exit status 2 (432.865266ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-891493 --alsologtostderr -v=1
E1002 12:42:48.409641 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/calico-409989/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-891493 -n newest-cni-891493
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-891493 -n newest-cni-891493
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.84s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (66.69s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-814881 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.2
E1002 12:43:40.941509 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/old-k8s-version-302558/client.crt: no such file or directory
E1002 12:43:40.946745 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/old-k8s-version-302558/client.crt: no such file or directory
E1002 12:43:40.956959 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/old-k8s-version-302558/client.crt: no such file or directory
E1002 12:43:40.977220 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/old-k8s-version-302558/client.crt: no such file or directory
E1002 12:43:41.017468 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/old-k8s-version-302558/client.crt: no such file or directory
E1002 12:43:41.097707 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/old-k8s-version-302558/client.crt: no such file or directory
E1002 12:43:41.258253 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/old-k8s-version-302558/client.crt: no such file or directory
E1002 12:43:41.579090 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/old-k8s-version-302558/client.crt: no such file or directory
E1002 12:43:42.219776 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/old-k8s-version-302558/client.crt: no such file or directory
E1002 12:43:43.500305 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/old-k8s-version-302558/client.crt: no such file or directory
E1002 12:43:46.060962 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/old-k8s-version-302558/client.crt: no such file or directory
E1002 12:43:49.462460 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/custom-flannel-409989/client.crt: no such file or directory
E1002 12:43:51.181139 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/old-k8s-version-302558/client.crt: no such file or directory
E1002 12:43:54.878570 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/default-k8s-diff-port-494105/client.crt: no such file or directory
E1002 12:43:54.883845 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/default-k8s-diff-port-494105/client.crt: no such file or directory
E1002 12:43:54.894072 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/default-k8s-diff-port-494105/client.crt: no such file or directory
E1002 12:43:54.914318 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/default-k8s-diff-port-494105/client.crt: no such file or directory
E1002 12:43:54.954571 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/default-k8s-diff-port-494105/client.crt: no such file or directory
E1002 12:43:55.034877 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/default-k8s-diff-port-494105/client.crt: no such file or directory
E1002 12:43:55.195236 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/default-k8s-diff-port-494105/client.crt: no such file or directory
E1002 12:43:55.515770 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/default-k8s-diff-port-494105/client.crt: no such file or directory
E1002 12:43:56.155920 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/default-k8s-diff-port-494105/client.crt: no such file or directory
E1002 12:43:57.436201 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/default-k8s-diff-port-494105/client.crt: no such file or directory
E1002 12:43:59.996675 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/default-k8s-diff-port-494105/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-814881 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.2: (1m6.68865039s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (66.69s)

TestStartStop/group/no-preload/serial/DeployApp (10.79s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-814881 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [7bda12f3-0d1a-4be5-8900-4b0e631b094d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1002 12:44:01.422125 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/old-k8s-version-302558/client.crt: no such file or directory
helpers_test.go:344: "busybox" [7bda12f3-0d1a-4be5-8900-4b0e631b094d] Running
E1002 12:44:05.118340 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/default-k8s-diff-port-494105/client.crt: no such file or directory
E1002 12:44:07.787364 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/functional-262988/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.042800891s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-814881 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.79s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.29s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-814881 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-814881 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.165543101s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-814881 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.29s)

TestStartStop/group/no-preload/serial/Stop (12.17s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-814881 --alsologtostderr -v=3
E1002 12:44:15.359152 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/default-k8s-diff-port-494105/client.crt: no such file or directory
E1002 12:44:21.902289 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/old-k8s-version-302558/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-814881 --alsologtostderr -v=3: (12.17032305s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.17s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-814881 -n no-preload-814881
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-814881 -n no-preload-814881: exit status 7 (78.41306ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-814881 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/no-preload/serial/SecondStart (346.16s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-814881 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.2
E1002 12:44:24.740756 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/functional-262988/client.crt: no such file or directory
E1002 12:44:35.840287 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/default-k8s-diff-port-494105/client.crt: no such file or directory
E1002 12:45:02.863322 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/old-k8s-version-302558/client.crt: no such file or directory
E1002 12:45:04.347162 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/enable-default-cni-409989/client.crt: no such file or directory
E1002 12:45:16.800479 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/default-k8s-diff-port-494105/client.crt: no such file or directory
E1002 12:45:29.617336 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/flannel-409989/client.crt: no such file or directory
E1002 12:45:52.510889 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/auto-409989/client.crt: no such file or directory
E1002 12:46:24.783501 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/old-k8s-version-302558/client.crt: no such file or directory
E1002 12:46:32.738976 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/bridge-409989/client.crt: no such file or directory
E1002 12:46:38.720682 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/default-k8s-diff-port-494105/client.crt: no such file or directory
E1002 12:46:50.516639 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/kindnet-409989/client.crt: no such file or directory
E1002 12:46:56.256910 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/addons-346248/client.crt: no such file or directory
E1002 12:46:57.793329 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/ingress-addon-legacy-999051/client.crt: no such file or directory
E1002 12:47:14.744650 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/ingress-addon-legacy-999051/client.crt: no such file or directory
E1002 12:47:15.558520 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/auto-409989/client.crt: no such file or directory
E1002 12:47:48.409721 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/calico-409989/client.crt: no such file or directory
E1002 12:48:13.559573 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/kindnet-409989/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-814881 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.2: (5m45.718401348s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-814881 -n no-preload-814881
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (346.16s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (10.05s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-f26bj" [6ad17fb6-a20c-4f44-a71c-5c72f26bdf30] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E1002 12:48:40.941492 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/old-k8s-version-302558/client.crt: no such file or directory
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-f26bj" [6ad17fb6-a20c-4f44-a71c-5c72f26bdf30] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 10.042283946s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (10.05s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.12s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-f26bj" [6ad17fb6-a20c-4f44-a71c-5c72f26bdf30] Running
E1002 12:48:49.462004 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/custom-flannel-409989/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.015756644s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-634110 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.12s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.36s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p embed-certs-634110 "sudo crictl images -o json"
E1002 12:48:54.878313 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/default-k8s-diff-port-494105/client.crt: no such file or directory
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.36s)

TestStartStop/group/embed-certs/serial/Pause (3.53s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-634110 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-634110 -n embed-certs-634110
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-634110 -n embed-certs-634110: exit status 2 (370.407ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-634110 -n embed-certs-634110
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-634110 -n embed-certs-634110: exit status 2 (362.831261ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-634110 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-634110 -n embed-certs-634110
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-634110 -n embed-certs-634110
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.53s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (11.03s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-6vstw" [8611d00f-4412-4a57-bf40-5258a1af6a43] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E1002 12:50:12.505728 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/custom-flannel-409989/client.crt: no such file or directory
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-6vstw" [8611d00f-4412-4a57-bf40-5258a1af6a43] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 11.02662046s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (11.03s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-6vstw" [8611d00f-4412-4a57-bf40-5258a1af6a43] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.010613739s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-814881 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.37s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p no-preload-814881 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.37s)

TestStartStop/group/no-preload/serial/Pause (3.41s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-814881 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-814881 -n no-preload-814881
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-814881 -n no-preload-814881: exit status 2 (334.293198ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-814881 -n no-preload-814881
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-814881 -n no-preload-814881: exit status 2 (349.722513ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-814881 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-814881 -n no-preload-814881
E1002 12:50:29.619096 2499598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/flannel-409989/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-814881 -n no-preload-814881
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.41s)


Test skip (29/299)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:152: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.28.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.2/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.2/cached-images (0.00s)

TestDownloadOnly/v1.28.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.2/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.2/binaries (0.00s)

TestDownloadOnly/v1.28.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.2/kubectl
aaa_download_only_test.go:152: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.2/kubectl (0.00s)

TestDownloadOnlyKic (0.65s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-791861 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:234: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-791861" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-791861
--- SKIP: TestDownloadOnlyKic (0.65s)

TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/parallel/HelmTiller (0s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:422: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:476: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestNetworkPlugins/group/kubenet (4.63s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:523: 
----------------------- debugLogs start: kubenet-409989 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-409989

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-409989

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-409989

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-409989

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-409989

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-409989

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-409989

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-409989

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-409989

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-409989

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-409989" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-409989"

>>> host: /etc/hosts:
* Profile "kubenet-409989" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-409989"

>>> host: /etc/resolv.conf:
* Profile "kubenet-409989" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-409989"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-409989

>>> host: crictl pods:
* Profile "kubenet-409989" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-409989"

>>> host: crictl containers:
* Profile "kubenet-409989" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-409989"

>>> k8s: describe netcat deployment:
error: context "kubenet-409989" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-409989" does not exist

>>> k8s: netcat logs:
error: context "kubenet-409989" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-409989" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-409989" does not exist

>>> k8s: coredns logs:
error: context "kubenet-409989" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-409989" does not exist

>>> k8s: api server logs:
error: context "kubenet-409989" does not exist

>>> host: /etc/cni:
* Profile "kubenet-409989" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-409989"

>>> host: ip a s:
* Profile "kubenet-409989" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-409989"

>>> host: ip r s:
* Profile "kubenet-409989" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-409989"

>>> host: iptables-save:
* Profile "kubenet-409989" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-409989"

>>> host: iptables table nat:
* Profile "kubenet-409989" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-409989"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-409989" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-409989" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-409989" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-409989" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-409989"

>>> host: kubelet daemon config:
* Profile "kubenet-409989" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-409989"

>>> k8s: kubelet logs:
* Profile "kubenet-409989" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-409989"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-409989" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-409989"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-409989" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-409989"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-409989

>>> host: docker daemon status:
* Profile "kubenet-409989" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-409989"

>>> host: docker daemon config:
* Profile "kubenet-409989" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-409989"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-409989" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-409989"

>>> host: docker system info:
* Profile "kubenet-409989" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-409989"

>>> host: cri-docker daemon status:
* Profile "kubenet-409989" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-409989"

>>> host: cri-docker daemon config:
* Profile "kubenet-409989" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-409989"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-409989" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-409989"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-409989" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-409989"

>>> host: cri-dockerd version:
* Profile "kubenet-409989" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-409989"

>>> host: containerd daemon status:
* Profile "kubenet-409989" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-409989"

>>> host: containerd daemon config:
* Profile "kubenet-409989" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-409989"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-409989" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-409989"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-409989" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-409989"

>>> host: containerd config dump:
* Profile "kubenet-409989" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-409989"

>>> host: crio daemon status:
* Profile "kubenet-409989" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-409989"

>>> host: crio daemon config:
* Profile "kubenet-409989" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-409989"

>>> host: /etc/crio:
* Profile "kubenet-409989" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-409989"

>>> host: crio config:
* Profile "kubenet-409989" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-409989"

----------------------- debugLogs end: kubenet-409989 [took: 4.406767005s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-409989" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-409989
--- SKIP: TestNetworkPlugins/group/kubenet (4.63s)

                                                
                                    
TestNetworkPlugins/group/cilium (4.69s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-409989 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-409989

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-409989

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-409989

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-409989

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-409989

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-409989

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-409989

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-409989

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-409989

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-409989

>>> host: /etc/nsswitch.conf:
* Profile "cilium-409989" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-409989"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-409989" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-409989"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-409989" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-409989"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-409989

>>> host: crictl pods:
* Profile "cilium-409989" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-409989"

>>> host: crictl containers:
* Profile "cilium-409989" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-409989"

>>> k8s: describe netcat deployment:
error: context "cilium-409989" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-409989" does not exist

>>> k8s: netcat logs:
error: context "cilium-409989" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-409989" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-409989" does not exist

>>> k8s: coredns logs:
error: context "cilium-409989" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-409989" does not exist

>>> k8s: api server logs:
error: context "cilium-409989" does not exist

>>> host: /etc/cni:
* Profile "cilium-409989" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-409989"

>>> host: ip a s:
* Profile "cilium-409989" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-409989"

>>> host: ip r s:
* Profile "cilium-409989" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-409989"

>>> host: iptables-save:
* Profile "cilium-409989" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-409989"

>>> host: iptables table nat:
* Profile "cilium-409989" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-409989"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-409989

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-409989

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-409989" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-409989" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-409989

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-409989

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-409989" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-409989" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-409989" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-409989" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-409989" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-409989" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-409989"

>>> host: kubelet daemon config:
* Profile "cilium-409989" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-409989"

>>> k8s: kubelet logs:
* Profile "cilium-409989" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-409989"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-409989" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-409989"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-409989" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-409989"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17340-2494243/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 02 Oct 2023 12:15:48 UTC
        provider: minikube.sigs.k8s.io
        version: v1.17.0
      name: cluster_info
    server: https://192.168.59.6:8443
  name: missing-upgrade-402693
contexts:
- context:
    cluster: missing-upgrade-402693
    extensions:
    - extension:
        last-update: Mon, 02 Oct 2023 12:15:48 UTC
        provider: minikube.sigs.k8s.io
        version: v1.17.0
      name: context_info
    namespace: default
    user: missing-upgrade-402693
  name: missing-upgrade-402693
current-context: missing-upgrade-402693
kind: Config
preferences: {}
users:
- name: missing-upgrade-402693
  user:
    client-certificate: /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/missing-upgrade-402693/client.crt
    client-key: /home/jenkins/minikube-integration/17340-2494243/.minikube/profiles/missing-upgrade-402693/client.key
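Every "context was not found" / "does not exist" line in this dump traces back to the kubeconfig above: its only defined context is missing-upgrade-402693, while the debug collector queries the already-deleted cilium-409989 profile. A minimal sketch of that lookup (Python; `kubeconfig` is a hand-written dict mirroring the dump above, not the real file on disk):

```python
# Kubeconfig contents mirroring the dump above, reduced to the fields that
# matter for context lookup (the dict and helper below are illustrative).
kubeconfig = {
    "apiVersion": "v1",
    "kind": "Config",
    "current-context": "missing-upgrade-402693",
    "contexts": [
        {
            "name": "missing-upgrade-402693",
            "context": {"cluster": "missing-upgrade-402693",
                        "user": "missing-upgrade-402693"},
        },
    ],
}

def context_exists(cfg: dict, name: str) -> bool:
    """Return True if the kubeconfig defines a context with the given name."""
    return any(c.get("name") == name for c in cfg.get("contexts", []))

# Only missing-upgrade-402693 is defined, so every cilium-409989 lookup fails,
# matching the kubectl errors collected above.
print(context_exists(kubeconfig, "missing-upgrade-402693"))  # True
print(context_exists(kubeconfig, "cilium-409989"))           # False
```

The same check is what `kubectl --context <name>` performs before issuing any API call, which is why each collector command fails immediately rather than timing out.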

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-409989

>>> host: docker daemon status:
* Profile "cilium-409989" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-409989"

>>> host: docker daemon config:
* Profile "cilium-409989" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-409989"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-409989" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-409989"

>>> host: docker system info:
* Profile "cilium-409989" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-409989"

>>> host: cri-docker daemon status:
* Profile "cilium-409989" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-409989"

>>> host: cri-docker daemon config:
* Profile "cilium-409989" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-409989"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-409989" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-409989"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-409989" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-409989"

>>> host: cri-dockerd version:
* Profile "cilium-409989" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-409989"

>>> host: containerd daemon status:
* Profile "cilium-409989" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-409989"

>>> host: containerd daemon config:
* Profile "cilium-409989" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-409989"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-409989" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-409989"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-409989" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-409989"

>>> host: containerd config dump:
* Profile "cilium-409989" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-409989"

>>> host: crio daemon status:
* Profile "cilium-409989" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-409989"

>>> host: crio daemon config:
* Profile "cilium-409989" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-409989"

>>> host: /etc/crio:
* Profile "cilium-409989" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-409989"

>>> host: crio config:
* Profile "cilium-409989" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-409989"

----------------------- debugLogs end: cilium-409989 [took: 4.516942775s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-409989" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-409989
--- SKIP: TestNetworkPlugins/group/cilium (4.69s)

TestStartStop/group/disable-driver-mounts (0.16s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-951415" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-951415
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)